{"id":70622,"date":"2022-08-23T19:16:48","date_gmt":"2022-08-23T11:16:48","guid":{"rendered":"https:\/\/www.seeedstudio.com\/blog\/?p=70622"},"modified":"2022-08-29T13:56:56","modified_gmt":"2022-08-29T05:56:56","slug":"faster-inference-with-tensorrt-on-nvidia-jetson-run-yolov5-at-27-fps-on-jetson-nano","status":"publish","type":"post","link":"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/23\/faster-inference-with-tensorrt-on-nvidia-jetson-run-yolov5-at-27-fps-on-jetson-nano\/","title":{"rendered":"Faster YOLOv5 inference with TensorRT, Run YOLOv5 at 27 FPS on Jetson Nano!"},"content":{"rendered":"\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1030\" height=\"539\" src=\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/8-1030x539.jpg\" alt=\"\" class=\"wp-image-70623\" srcset=\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/8-1030x539.jpg 1030w, https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/8-300x157.jpg 300w, https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/8-768x402.jpg 768w, https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/8-1024x536.jpg 1024w, https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/8.jpg 1200w\" sizes=\"(max-width: 1030px) 100vw, 1030px\" \/><\/figure><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Why use TensorRT?<\/h2>\n\n\n\n<p>TensorRT-based applications perform up to 36x faster than CPU-only platforms during inference. It offers response times of under 7ms and performs target-specific optimizations, enabling developers to optimize neural network models trained in all major frameworks, such as PyTorch, TensorFlow, ONNX, and MATLAB, for faster inference. 
It can also be integrated with application-specific software development kits such as NVIDIA DeepStream, Riva, Merlin, Maxine, Modulus, Morpheus, and Broadcast Engine.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How does TensorRT perform optimization?<\/h2>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/DakMAJpPAxFyZuPPtXZx87IbLJqbTf0c9sE5uXyBWHRTLkWGvAMoJDGa2bj0fPP9p7FEY90_Y7AOBsRWydEMcofg_uuf0Ngiae78vrZWSC1bolByalB-S7AhvjtkxsZHcvGXArwJk628ZscNp_nOnBs\" alt=\"\"\/><figcaption>Source: <a href=\"https:\/\/developer.nvidia.com\/tensorrt\">NVIDIA<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<p>We have seen how TensorRT helps developers optimize their models; now let us look at the six processes TensorRT uses to make that happen.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">1. Weight &amp; Activation Precision Calibration<\/h3>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/h61y5P5B-hN1Iqam-j8tX-18b0I10fi0NhpDsp_n2qNscP5c2Nd88o5VbLFRLg5t3QccYNI_lqvtQL_W1_KCBLgY0oDAAg0VcJPeuwMGsSRYVVGnNJQY3Pc85iTqMRMbY4RZSBAF3QqjXWBhjkNg2mQ\" alt=\"\"\/><figcaption>Source: <a href=\"https:\/\/on-demand.gputechconf.com\/gtc-cn\/2018\/pdf\/CH8212.pdf\">NVIDIA<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<p>Nearly all deep learning models are trained in FP32 to take advantage of a wider dynamic range. However, FP32 models take longer to predict, holding back real-time responses.&nbsp;<\/p>\n\n\n\n<p>In this process, model quantization converts the parameters and activations to FP16 or INT8. This reduces latency and model size, but normally at the cost of some accuracy. By using <a href=\"https:\/\/en.wikipedia.org\/wiki\/Kullback%E2%80%93Leibler_divergence\">KL-divergence<\/a>, TensorRT is able to measure the difference between the original and quantized distributions and minimize it, thereby preserving accuracy while maximizing throughput. 
We can see the difference between FP32 and INT8\/FP16 in the picture above.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2. Layer &amp; Tensor Fusion<\/h3>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/QZAJP8G_GvUVd52_nZE3l21hiDgX7_N-u_IJIgn9IVEDJ5XtQrMXxIUXJxwERfeH1EvrFUKjk7rQXYupOoYARBNVd2tb7wqL4KSBdXcCoszI5lGUbjA39_KgtdmCMUehOUTgU0jEWW4n-1aHH7wdhzE\" alt=\"\"\/><figcaption>Source: <a href=\"https:\/\/developer.nvidia.com\/blog\/tensorrt-3-faster-tensorflow-inference\/#:~:text=Optimization%203%3A%20Kernel%20Auto%2DTuning&amp;text=TensorRT%20will%20pick%20the%20implementation,batch%20size%20and%20other%20parameters.\">NVIDIA<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<p>In this process, TensorRT uses layer and tensor fusion to optimize GPU memory and bandwidth usage by fusing nodes into a single kernel, vertically, horizontally, or sometimes both. This reduces the overhead of reading and writing the tensor data for each layer.&nbsp;<\/p>\n\n\n\n<p>We can see from the picture above that TensorRT recognizes layers with the same inputs and filter sizes and merges them to form a single layer. It also eliminates concatenation layers (\u201cconcat\u201d in the picture above).<\/p>\n\n\n\n<p>Overall, this results in a smaller, faster, and more efficient graph with fewer layers and kernel launches, which reduces inference latency.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3. Kernel Auto-Tuning<\/h3>\n\n\n\n<p>During this process, TensorRT selects the best kernels, algorithms, and batch size based on the target GPU platform in order to find the best performance. This ensures that the deployed model is tuned for each deployment platform.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">4. Dynamic Tensor Memory<\/h3>\n\n\n\n<p>In this process, TensorRT minimizes memory footprint and reuses memory by allocating memory for each tensor only for the duration of its usage, avoiding memory allocation overhead for faster and more efficient execution.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">5. Multi-Stream Execution<\/h3>\n\n\n\n<p>TensorRT is designed to process multiple input streams in parallel.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">6. Time Fusion<\/h3>\n\n\n\n<p>In this final step before the output stage, TensorRT optimizes recurrent neural networks over time steps with dynamically generated kernels.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What models can be converted to TensorRT?<\/h2>\n\n\n\n<p>TensorRT officially supports the conversion of models from frameworks such as Caffe, TensorFlow, PyTorch, and ONNX.<\/p>\n\n\n\n<p>It also provides three ways to convert models:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Integrate TensorRT in TensorFlow using <a href=\"https:\/\/docs.nvidia.com\/deeplearning\/frameworks\/tf-trt-user-guide\/index.html\">TF-TRT<\/a>.<\/li><li><a href=\"https:\/\/github.com\/NVIDIA-AI-IOT\/torch2trt\">torch2trt<\/a>: PyTorch to TensorRT converter, which utilizes the TensorRT Python API.<\/li><li><a href=\"https:\/\/github.com\/wang-xinyu\/tensorrtx\">tensorrtx<\/a>: construct the model structure with the TensorRT network definition APIs and then manually load the weight information; it implements popular deep learning networks this way.&nbsp;<\/li><\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">What does this mean for NVIDIA Jetson?<\/h2>\n\n\n\n<p><strong>Note:<\/strong>&nbsp;All models are run at&nbsp;<strong>FP32<\/strong>&nbsp;precision.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"432\" height=\"176\" src=\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-25.png\" alt=\"\" class=\"wp-image-70836\" 
srcset=\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-25.png 432w, https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-25-300x122.png 300w\" sizes=\"(max-width: 432px) 100vw, 432px\" \/><\/figure>\n\n\n\n<p><a href=\"https:\/\/www.seeedstudio.com\/tag\/nvidia.html\">NVIDIA Jetson<\/a> is a family of embedded devices with the ability to run AI at the edge. Compared with traditional systems that can do AI inference, such as a desktop PC with a GPU, Jetson devices are very small, but their performance is also more limited. To get the best performance out of these Jetson systems, implementing TensorRT is very helpful. <\/p>\n\n\n\n<p>Now let us compare how much of a performance increase we can expect by using TensorRT on a Jetson device. As an example, we have run inference using YOLOv5 on a Jetson Nano device and checked the inference performance with and without TensorRT.<\/p>\n\n\n\n<p>For inference without TensorRT, we used the <a href=\"https:\/\/github.com\/ultralytics\/yolov5\">ultralytics\/yolov5<\/a> repo with the yolov5n pre-trained model.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Step 1: Refer to step 1 &#8211; step 8 in <a href=\"https:\/\/wiki.seeedstudio.com\/YOLOv5-Object-Detection-Jetson\/#inference-on-jetson-device\"><strong>this wiki section<\/strong><\/a><\/li><li>Step 2: Connect a webcam to the Jetson device and run the following inside the YOLOv5 directory&nbsp;<\/li><\/ul>\n\n\n\n<p>python3 detect.py --source 0 --weights yolov5n.pt<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/5J7YYBvHZ2aDRdj0klp8ml6csmxPqlIKPA0ccKVcCEJx5e_amgeWAGuxz2vpfd_M3kL2AAQihXk5YWkE61ePD_BDTUO_Ud0V7tJmiAcUWFD7FOrcEHPvNldcB75vgfx78UELYvpSuySRnyATkSQBKjw\" alt=\"\"\/><\/figure>\n\n\n\n<p><strong>As you can see, the inference time is about 0.060s = 60ms, which is nearly 1000\/60 = 
16.7fps<\/strong><\/p>\n\n\n\n<p>For inference with TensorRT, we used the <a href=\"https:\/\/github.com\/ultralytics\/yolov5\">ultralytics\/yolov5<\/a> repo in combination with the <a href=\"https:\/\/github.com\/wang-xinyu\/tensorrtx\">wang-xinyu\/tensorrtx<\/a> repo and the yolov5n pre-trained model.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Step 1: Refer to step 1 &#8211; step 20 in <a href=\"https:\/\/wiki.seeedstudio.com\/YOLOv5-Object-Detection-Jetson\/#inference-on-jetson-device\">this wiki section<\/a><\/li><li>Step 2: Run the following with the required images for inference loaded into the \u201cimages\u201d directory<\/li><\/ul>\n\n\n\n<p>sudo .\/yolov5 -d yolov5n.engine images<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/Mc94srV915CsxrILUVEcPON1Mbud-izbCB6hlrrdRa9zEDA0uNYlepLdHjEKJlw2CSr3DXVn4mwOL7NUpZ4NVGcSFpSCqfwGPk7Ox81x69s0C4TBhDIKY7yrMjzhRZnNyjZF5y0L15-IT-n1nces2GA\" alt=\"\"\/><\/figure>\n\n\n\n<p><strong>As you can see, the inference time is about 0.037s = 37ms, which is nearly 1000\/37 = 27fps<\/strong><\/p>\n\n\n\n<p>We have also run inference using the YOLOv5n pre-trained model on a Jetson Xavier NX device and checked the inference performance with TensorRT. 
Here we used the <a href=\"https:\/\/github.com\/ultralytics\/yolov5\">ultralytics\/yolov5<\/a> repo in combination with the <a href=\"https:\/\/github.com\/marcoslucianops\/DeepStream-Yolo\">marcoslucianops\/DeepStream-Yolo<\/a> repo and the yolov5n pre-trained model.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Step 1: Refer to step 1 &#8211; step 10 in <a href=\"https:\/\/wiki.seeedstudio.com\/YOLOv5-Object-Detection-Jetson\/#using-tensorrt-and-deepstream-sdk\">this wiki section<\/a><\/li><li>Step 2: Run the following to view the inference<\/li><\/ul>\n\n\n\n<p>deepstream-app -c deepstream_app_config.txt<\/p>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/xavier-nx-yolov5n-640.png\" alt=\"\" class=\"wp-image-70811\" width=\"416\" height=\"205\" srcset=\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/xavier-nx-yolov5n-640.png 889w, https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/xavier-nx-yolov5n-640-300x148.png 300w, https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/xavier-nx-yolov5n-640-768x379.png 768w\" sizes=\"(max-width: 416px) 100vw, 416px\" \/><\/figure>\n\n\n\n<p><strong>The above result is from running on a Jetson Xavier NX at FP32. 
We can see that the FPS is around 60.<\/strong><\/p>\n\n\n\n<p>So we can conclude that even on the Jetson platform, using TensorRT gives you much better inference performance on computer vision tasks!<\/p>\n\n\n\n<p>We also recommend you check out the <a href=\"https:\/\/deci.ai\/blog\/convert-pytorch-model-tensorrt-deploy\/\">Deci<\/a> platform for fast conversion to TensorRT\u2122.<\/p>\n\n\n\n<p>The table below summarizes the optimization results and shows that the optimized TensorRT\u2122 model outperforms the original at inference in every metric measured.<\/p>\n\n\n\n<figure class=\"wp-block-image is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/deci.ai\/wp-content\/uploads\/2022\/01\/deci-pytorch-tensorrt-image9.png\" alt=\"\" class=\"wp-image-8379\" width=\"695\" height=\"118\"\/><figcaption>Source: <a href=\"https:\/\/deci.ai\/blog\/convert-pytorch-model-tensorrt-deploy\/\">How to Convert a PyTorch Model to TensorRT\u2122 and Deploy it in 10 Minutes<\/a><\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">TensorRT and NVIDIA Jetson Projects<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Get started with <a href=\"https:\/\/github.com\/dusty-nv\/jetson-inference\">Hello AI World<\/a><\/h3>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-21.png\" alt=\"\" 
class=\"wp-image-70625\" width=\"681\" height=\"362\" srcset=\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-21.png 908w, https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-21-300x159.png 300w, https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-21-768x408.png 768w\" sizes=\"(max-width: 681px) 100vw, 681px\" \/><figcaption>NVIDIA Jetson <a href=\"https:\/\/github.com\/dusty-nv\/jetson-inference\">Hello AI World<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<p>Hello AI World is a guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson. It shows you how to use TensorRT to efficiently deploy neural networks onto the embedded Jetson platform, improving performance and power efficiency using graph optimizations, kernel fusion, and FP16\/INT8 precision.<\/p>\n\n\n\n<p>The guide mainly covers image classification, object detection, semantic segmentation, pose estimation, and mono depth. Video tutorials for each model can be found in this GitHub <a href=\"https:\/\/github.com\/dusty-nv\/jetson-inference\">link<\/a>.<\/p>\n\n\n\n<p><strong>MMDetection<\/strong> is an open-source object detection toolbox based on the previously mentioned PyTorch. It consists of training recipes, pre-trained models, and dataset support. It runs on Linux, Windows, and macOS and requires Python 3.6+, CUDA 9.2+, and PyTorch 1.5+. 
They have also released a library,&nbsp;<a href=\"https:\/\/github.com\/open-mmlab\/mmcv\">mmcv<\/a>,&nbsp;for computer vision research. By calling its modules, we can implement a new algorithm with a small amount of code, greatly improving code reuse.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/QUy2WnsMt8ly3n-8Zm8HpsQHxU4rONUpyt-9EKsxiHh8l6DtAXL427ooE3RZdlwz64dtaz_ego9hV49luBLqA-pz-KFfLZ1jTMG8pv7vz1nTgL-USYexg0p1_Y28AtZrkkuK1odZVByZhKHihd2GDPA\" alt=\"\"\/><\/figure>\n\n\n\n<p>MMDeploy is an open-source deep learning model deployment toolset. It is a part of the&nbsp;<a href=\"https:\/\/openmmlab.com\/\">OpenMMLab<\/a>&nbsp;project. Check this&nbsp;<a href=\"https:\/\/github.com\/open-mmlab\/mmdeploy\/blob\/master\/docs\/en\/01-how-to-build\/jetsons.md\">guide<\/a>&nbsp;to learn how to install MMDeploy on NVIDIA Jetson edge platforms such as Seeed\u2019s reComputer.&nbsp;<\/p>\n\n\n\n<p>The Model Converter of MMDeploy on Jetson platforms depends on&nbsp;<a href=\"https:\/\/github.com\/open-mmlab\/mmcv\">MMCV<\/a>&nbsp;and the inference engine&nbsp;<a href=\"https:\/\/developer.nvidia.com\/tensorrt\">TensorRT<\/a>. 
While MMDeploy C\/C++ Inference SDK relies on&nbsp;<a href=\"https:\/\/github.com\/gabime\/spdlog\">spdlog<\/a>, OpenCV and&nbsp;<a href=\"https:\/\/github.com\/openppl-public\/ppl.cv\">ppl.cv<\/a>,&nbsp;as well as TensorRT.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/www.hackster.io\/cristian-lazo-quispe\/real-time-road-space-rationing-control-using-jetson-nano-89c2da\">Automatic License Plate Recognition<\/a><\/h3>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/yQ-S-UYjh-JgJzm4NCziybjdJOBRbKZ2pJhNqBeXSyFFQeUKcwcOOWgpSVBjmUCp0Rfnr7BdLPm56YzV5YVrCxx9tRUOO6r9xUWC_8ILHIJhYF_8ZJKptQ35UrUUptVew6D-ANvTcrN0g2xoui1d_zQ\" alt=\"\" width=\"450\" height=\"338\"\/><figcaption>Source: <a href=\"https:\/\/www.hackster.io\/cristian-lazo-quispe\/real-time-road-space-rationing-control-using-jetson-nano-89c2da\">Cristian Lazo Quispe<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<p>Lima, the capital city of Peru, has the third worst traffic in the world in 2018. Thus, they implemented a driving restriction policy where on Monday and Wednesday, only odd-number license plates are allowed, and on Tuesday and Thursday, only even-number license plates can drive out. As of now, traffic police officers are checking manually on the streets, which is time-consuming and a waste of taxpayer money. With automation, manual labor can be reduced to a minimum, and all spots on the highway can be detected as well!&nbsp;<\/p>\n\n\n\n<p>This solution is portable and cheap and can be used for other cities that are facing similar traffic congestion issues. The webcam would capture real-time video of the streets and detect the cars and their license plates using a Mobilenet SSD model optimized for TensorRT. It will then be sent to the OpenALPR module to check if it is compliant with the law on different days. 
A buzzer will also sound out once an offender is detected.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Here\u2019s what you need for this project:<\/h4>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/www.seeedstudio.com\/NVIDIA-Jetson-Nano-Development-Kit-B01-p-4437.html?queryID=d8d87d1dd46413403b6bff7e07d40296&amp;objectID=4437&amp;indexName=bazaar_retailer_products\">NVIDIA Jetson Nano<\/a>\/ <a href=\"https:\/\/www.seeedstudio.com\/Jetson-10-1-A0-p-5336.html\">reComputer J1010<\/a><\/li><li><a href=\"https:\/\/www.seeedstudio.com\/Grove-16x2-LCD-White-on-Blue.html?queryID=cb151568714c67da2fe0d7fe4ac7b78d&amp;objectID=21&amp;indexName=bazaar_retailer_products\">16&#215;2 LCD<\/a><\/li><li><a href=\"https:\/\/www.seeedstudio.com\/300K-Pixel-USB-2-0-Mini-Webcam-p-1499.html?queryID=b9a22f77ad98c4abdb64c7ffcb4e12c5&amp;objectID=1382&amp;indexName=bazaar_retailer_products\">USB Webcam<\/a><\/li><li><a href=\"https:\/\/www.seeedstudio.com\/Grove-Buzzer.html?queryID=ca622408393dff2701b33e51a3c56c40&amp;objectID=1805&amp;indexName=bazaar_retailer_products\">Buzzer<\/a><\/li><li>TensorFlow<\/li><li>TensorRT<\/li><li>pycuda<\/li><li>LLVM<\/li><li>Numba<\/li><li>scikit-learn<\/li><li>SciPy<\/li><li>OpenALPR<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/www.hackster.io\/MatPiech\/deep-edge-tracker-08c8c8\">Traffic Light Management<\/a><\/h3>\n\n\n\n<p>After an ALPR project, you can try your hand at traffic light management which also aims to reduce traffic congestion!<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-20.png\" alt=\"\" class=\"wp-image-70624\" width=\"497\" height=\"368\" srcset=\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-20.png 663w, https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-20-300x222.png 300w\" 
sizes=\"(max-width: 497px) 100vw, 497px\" \/><figcaption>Source: <a href=\"https:\/\/www.hackster.io\/MatPiech\/deep-edge-tracker-08c8c8\">Mateusz Piechocki, Bartosz Ptak<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<p>Poorly managed traffic lights are a major cause of traffic bottlenecks. This project seeks to solve such issues using the <a href=\"https:\/\/arxiv.org\/abs\/1703.07402\">DeepSORT<\/a> object-tracking algorithm based on YOLOv4 to ensure a real-time response. TensorRT was used for optimization and quantization on the Jetson Xavier NX.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Here\u2019s what you need for this project:<\/h4>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/www.seeedstudio.com\/NVIDIA-Jetson-Xavier-NX-Developer-Kit-p-4573.html?queryID=949859053e6430c64abbcb7f07adb60b&amp;objectID=4573&amp;indexName=bazaar_retailer_products\">NVIDIA Jetson Xavier NX<\/a>\/ <a href=\"https:\/\/www.seeedstudio.com\/Jetson-20-1-H2-p-5329.html\">reComputer J2012 (Jetson Xavier NX)<\/a><\/li><li>Intel Neural Compute Stick 2<\/li><li><a href=\"https:\/\/www.seeedstudio.com\/Raspberry-Pi-4B-Basic-Starter-Kit-4GB-p-4264.html?queryID=62416b0d3a4be0faea22e1779f82021a&amp;objectID=4264&amp;indexName=bazaar_retailer_products\">Raspberry Pi 4 Model B<\/a><\/li><li>USB Multifunction Tester<\/li><li>TensorFlow<\/li><li>TensorRT<\/li><li>OpenVINO<\/li><li>Raspberry Pi Raspbian<\/li><li>DeepSORT<\/li><li>YOLOv4<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/github.com\/JardinRyu\/Jetson_Nano_People_Counting\">Real-Time People Tracking &amp; Counting<\/a><\/h3>\n\n\n\n<p>This project detects real-time information about people coming in and out of a certain location (indicated by a line). It is significantly better and cheaper than hiring a human counter standing by a mall entrance. 
It is less prone to human errors, and costs will be significantly lower.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Here\u2019s what you need for this project:<\/h4>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/www.seeedstudio.com\/NVIDIA-Jetson-Nano-Development-Kit-B01-p-4437.html?queryID=d8d87d1dd46413403b6bff7e07d40296&amp;objectID=4437&amp;indexName=bazaar_retailer_products\">NVIDIA Jetson Nano<\/a>\/ <a href=\"https:\/\/www.seeedstudio.com\/Jetson-10-1-A0-p-5336.html\">reComputer J1010 (Jetson Nano)<\/a><\/li><li><a href=\"https:\/\/www.seeedstudio.com\/Raspberry-Pi-Camera-Module-V2.html?queryID=1fd6ff647ef082ce92cc35ad4f9d01f7&amp;objectID=375&amp;indexName=bazaar_retailer_products\">Raspberry Pi Camera<\/a> \/ <a href=\"https:\/\/www.seeedstudio.com\/300K-Pixel-USB-2-0-Mini-Webcam-p-1499.html?queryID=b9a22f77ad98c4abdb64c7ffcb4e12c5&amp;objectID=1382&amp;indexName=bazaar_retailer_products\">USB Webcam<\/a><\/li><li>TensorRT<\/li><li>JetPack<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/github.com\/JordanMicahBennett\/Smart-Ai-Pothole-Detector------Powered-by-Tensorflow-TensorRT-on-Google-Colab-and-or-Jetson-Nano\">Pothole Detector&nbsp;<\/a><\/h3>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/rCoqbwozvcOOPW6TvhlmUjhq_C557QZyhtr4AxSaRpW11imO-OFAnsrjm_i7doiJi5Ef3dgj29c62-Apf9JHGdOj7UXTVkThZVZWPv5bz0FtMHE2cWyzAzk-9hgoWx1in0M0IkYUk16tNJ8XjoGrCOI\" alt=\"\" width=\"480\" height=\"323\"\/><figcaption>Source: <a href=\"https:\/\/github.com\/JordanMicahBennett\/Smart-Ai-Pothole-Detector------Powered-by-Tensorflow-TensorRT-on-Google-Colab-and-or-Jetson-Nano\">Jordan Bennett<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<p>Potholes damage cars and can cause cracks and bends on rims which will affect the wheel alignment and cost a significant amount of money to change out. 
This project seeks to help governments and local authorities find potholes and prevent this damage before it happens.&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Here\u2019s what you need for this project:<\/h4>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/www.seeedstudio.com\/NVIDIA-Jetson-Nano-Development-Kit-B01-p-4437.html?queryID=d8d87d1dd46413403b6bff7e07d40296&amp;objectID=4437&amp;indexName=bazaar_retailer_products\">NVIDIA Jetson Nano<\/a>\/ <a href=\"https:\/\/www.seeedstudio.com\/Jetson-10-1-A0-p-5336.html\">reComputer J1010 (Jetson Nano)<\/a><\/li><li><a href=\"https:\/\/www.seeedstudio.com\/Raspberry-Pi-Camera-Module-V2.html?queryID=1fd6ff647ef082ce92cc35ad4f9d01f7&amp;objectID=375&amp;indexName=bazaar_retailer_products\">Raspberry Pi Camera<\/a> \/ <a href=\"https:\/\/www.seeedstudio.com\/300K-Pixel-USB-2-0-Mini-Webcam-p-1499.html?queryID=b9a22f77ad98c4abdb64c7ffcb4e12c5&amp;objectID=1382&amp;indexName=bazaar_retailer_products\">USB Webcam<\/a><\/li><li>TensorRT<\/li><li>TensorFlow<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/www.hackster.io\/AdamMiltonBarker\/leukemia-detection-with-nvidia-jetson-nano-e535b4\">Leukemia Classifier<\/a><\/h3>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe title=\"Acute Lymphoblastic Leukemia Jetson Nano Classifier\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/VVTEp0O-IiA?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>This medical support project detects Acute Lymphoblastic Leukemia (<a href=\"https:\/\/www.leukaemiamedtechresearch.org.uk\/\">ALL<\/a>). 
ALL is the most common leukemia in children and accounts for up to 20% of acute leukemia in adults. The classifier was developed using Intel\u2019s <a href=\"https:\/\/www.intel.com\/content\/www\/us\/en\/developer\/tools\/oneapi\/overview.html#gs.8wz7pw\">oneAPI<\/a> and <a href=\"https:\/\/www.intel.com\/content\/www\/us\/en\/developer\/articles\/guide\/getting-started-with-intel-optimization-of-pytorch.html\">Optimization<\/a> of PyTorch. TensorRT was used for high-performance inference on the Jetson Nano.<\/p>\n\n\n\n<p>However, a few disclaimers about this project:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Even though this model may be accurate and shows good results on paper and in real-world testing, it is trained on a small set of data.&nbsp;<\/li><li>No doctors, medical or cancer experts were involved in contributing to this repository.<\/li><\/ul>\n\n\n\n<h4 class=\"wp-block-heading\">Here\u2019s what you need for this project:<\/h4>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/www.seeedstudio.com\/NVIDIA-Jetson-Nano-Development-Kit-B01-p-4437.html?queryID=d8d87d1dd46413403b6bff7e07d40296&amp;objectID=4437&amp;indexName=bazaar_retailer_products\">NVIDIA Jetson Nano<\/a>\/ <a href=\"https:\/\/www.seeedstudio.com\/Jetson-10-1-A0-p-5336.html\">reComputer J1010<\/a>&nbsp;<\/li><li>Intel NUC Kit<\/li><li>Intel oneAPI<\/li><li>TensorFlow<\/li><li>TensorFlow RunTime<\/li><li>TensorRT<\/li><li>ONNX<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/www.hackster.io\/shahizat\/face-mask-detection-system-using-ai-and-nvidia-jetson-board-3cfae7\">Face Mask Detection System<\/a><\/h3>\n\n\n\n<p>Since the Covid-19 pandemic, nearly everyone wears a face mask, which makes it very hard for many facial recognition technologies to detect faces.&nbsp;<\/p>\n\n\n\n<p>This project uses the SSD-MobileNet algorithm, the fastest single-shot detection model available for NVIDIA Jetson boards. 
It also uses the Kaggle dataset, which can be downloaded <a href=\"https:\/\/www.kaggle.com\/datasets\/andrewmvd\/face-mask-detection\/metadata\">here<\/a>. TensorRT was used to improve detection time, allowing Jetson Xavier NX to achieve higher than 100FPS.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Here\u2019s what you need for this project:<\/h4>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/www.seeedstudio.com\/NVIDIA-Jetson-Nano-Development-Kit-B01-p-4437.html?queryID=d8d87d1dd46413403b6bff7e07d40296&amp;objectID=4437&amp;indexName=bazaar_retailer_products\">NVIDIA Jetson Nano<\/a>\/ <a href=\"https:\/\/www.seeedstudio.com\/Jetson-10-1-A0-p-5336.html\">reComputer J1010 (Jetson Nano)<\/a> (this project used <a href=\"https:\/\/www.seeedstudio.com\/NVIDIA-Jetson-Xavier-NX-Developer-Kit-p-4573.html?queryID=949859053e6430c64abbcb7f07adb60b&amp;objectID=4573&amp;indexName=bazaar_retailer_products\">NVIDIA Jetson Xavier NX<\/a>\/ <a href=\"https:\/\/www.seeedstudio.com\/Jetson-20-1-H2-p-5329.html\">reComputer J2012 (Jetson Xavier NX)<\/a>)<\/li><li><a href=\"https:\/\/www.seeedstudio.com\/Raspberry-Pi-Camera-Module-V2.html?queryID=1fd6ff647ef082ce92cc35ad4f9d01f7&amp;objectID=375&amp;indexName=bazaar_retailer_products\">Raspberry Pi Camera<\/a> (this project used a <a href=\"https:\/\/www.seeedstudio.com\/300K-Pixel-USB-2-0-Mini-Webcam-p-1499.html?queryID=b9a22f77ad98c4abdb64c7ffcb4e12c5&amp;objectID=1382&amp;indexName=bazaar_retailer_products\">USB Webcam<\/a>)<\/li><li>PyTorch<\/li><li>TensorRT<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/www.hackster.io\/shahizat\/safety-helmet-detection-system-based-on-yolov7-algorithm-3d4cef\">Safety Helmet Detection System<\/a><\/h3>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe title=\"Safety helmet detection system on NVIDIA Jetson Xavier NX board 
using YOLO v7\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/ysTvezsZFAs?start=55&#038;feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>Safety helmets are the most important piece of equipment in industrial workplaces, protecting workers against accidents. This project seeks to detect whether workers are complying and wearing their safety helmets during work. It uses the latest YOLOv7 to train a custom object detection model that detects workers wearing safety helmets, with TensorRT used to accelerate inference.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Here\u2019s what you need for this project:<\/h4>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/www.seeedstudio.com\/NVIDIA-Jetson-Nano-Development-Kit-B01-p-4437.html?queryID=d8d87d1dd46413403b6bff7e07d40296&amp;objectID=4437&amp;indexName=bazaar_retailer_products\">NVIDIA Jetson Nano<\/a> \/ <a href=\"https:\/\/www.seeedstudio.com\/NVIDIA-Jetson-Xavier-NX-Developer-Kit-p-4573.html?queryID=949859053e6430c64abbcb7f07adb60b&amp;objectID=4573&amp;indexName=bazaar_retailer_products\">NVIDIA Jetson Xavier NX<\/a>\/ <a href=\"https:\/\/www.seeedstudio.com\/Jetson-10-1-A0-p-5336.html\">reComputer J1010 (Jetson Nano)<\/a>\/ <a href=\"https:\/\/www.seeedstudio.com\/Jetson-20-1-H2-p-5329.html\">reComputer J2012 (Jetson Xavier NX)<\/a><\/li><li>Microsoft VScode<\/li><li>YOLOv7<\/li><li>TensorRT<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/www.hackster.io\/bluetiger9\/deep-eye-deepstream-based-video-analytics-made-easy-d6dc5e\">DeepStream Video Analytics Robot<\/a><\/h3>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" 
src=\"https:\/\/lh4.googleusercontent.com\/Tk64ndr53opfWmVDyRosn8Id2RrgDvFtCHZc2loa2ZUS2ZPCKqqU5G4u2gGIBQ4H3d0tmyAv4r6NTGk0Ib0w2YdeyNxmxmWpG6HN1uvGaWay8wy4Rw-Ou0IHmXG6iKHrzdtf5xpXl5PYVrsLzVGPonw\" alt=\"\"\/><\/figure>\n\n\n\n<p>Source: <a href=\"https:\/\/www.hackster.io\/bluetiger9\/deep-eye-deepstream-based-video-analytics-made-easy-d6dc5e\"><strong>Attila T\u0151k\u00e9s<\/strong><\/a><\/p>\n\n\n\n<p>Deep Eye, the robot above, is a rapid prototyping platform for NVIDIA DeepStream-based video analytics applications. TensorRT allowed Deep Eye to implement hardware-accelerated inference and detection.<\/p>\n\n\n\n<p>It has three main components:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>A hardware platform built around the Jetson Nano<\/li><li>DeepLib, an easy-to-use Python library for DeepStream-based video processing<\/li><li>A web IDE for quickly creating DeepStream-based applications<\/li><\/ul>\n\n\n\n<h4 class=\"wp-block-heading\">Here\u2019s what you need for this project:<\/h4>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/www.seeedstudio.com\/NVIDIA-Jetson-Nano-Development-Kit-B01-p-4437.html?queryID=d8d87d1dd46413403b6bff7e07d40296&amp;objectID=4437&amp;indexName=bazaar_retailer_products\">NVIDIA Jetson Nano<\/a>\/ <a href=\"https:\/\/www.seeedstudio.com\/Jetson-10-1-A0-p-5336.html\">reComputer J1010 (Jetson Nano)<\/a><\/li><li><a href=\"https:\/\/www.seeedstudio.com\/Raspberry-Pi-Camera-Module-V2.html?queryID=1fd6ff647ef082ce92cc35ad4f9d01f7&amp;objectID=375&amp;indexName=bazaar_retailer_products\">Raspberry Pi Camera<\/a><\/li><li><a href=\"https:\/\/www.seeedstudio.com\/TowerPro-Airplane-9g-SG-90-Mini-Servo-p-654.html?queryID=919d5d0322e4690327daf04c85c6c6e3&amp;objectID=1898&amp;indexName=bazaar_retailer_products\">SG90 Micro Servo Motor<\/a><\/li><li><a 
href=\"https:\/\/www.seeedstudio.com\/Small-Size-and-High-Torque-Stepper-Motor-24BYJ48-p-1922.html?queryID=d84dba75c51a09dc4fa297954432bd0e&amp;objectID=2742&amp;indexName=bazaar_retailer_products\">Stepper Motor<\/a><\/li><li>Adafruit 16-Channel 12-bit PWM\/Servo Shield &#8211; I2C interface<\/li><li><a href=\"https:\/\/www.seeedstudio.com\/PSMD-Triple-Axis-Driver-p-1029.html?queryID=5193345e09a09add5e6e786ba7f3ae66&amp;objectID=2214&amp;indexName=bazaar_retailer_products\">A4988 Stepper motor driver board<\/a><\/li><li>Prototype (designed in FreeCAD, tutorial found <a href=\"https:\/\/www.hackster.io\/bluetiger9\/deep-eye-deepstream-based-video-analytics-made-easy-d6dc5e\">here<\/a>)<\/li><li>JetPack<\/li><li>DeepStream SDK<\/li><li>DeepStream Python Bindings<\/li><li>TensorRT<\/li><li>SDK Manager<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/www.hackster.io\/actionai\/actionai-custom-tracking-multiperson-activity-recognition-fa5cb5\">Action Tracking &amp; Activity Recognition<\/a><\/h3>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe title=\"Quickly Prototype Human Activity Recognition Apps with ActionAI\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/D336kYio_TU?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>This intelligent video analytics project performs multi-person tracking and activity recognition. OpenCV acquires and processes the video, OpenPose estimates body poses, and a scikit-learn implementation tracks person instances. 
Together with TensorRT converters for optimized inference on the Jetson Nano, these components complete the tracking and recognition pipeline.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Here\u2019s what you need for this project:<\/h4>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/www.seeedstudio.com\/NVIDIA-Jetson-Nano-Development-Kit-B01-p-4437.html?queryID=d8d87d1dd46413403b6bff7e07d40296&amp;objectID=4437&amp;indexName=bazaar_retailer_products\">NVIDIA Jetson Nano<\/a>\/ <a href=\"https:\/\/www.seeedstudio.com\/Jetson-10-1-A0-p-5336.html\">reComputer J1010 (Jetson Nano)<\/a><\/li><li>Webcam<\/li><li>TensorFlow<\/li><li>TensorRT<\/li><li>PyTorch<\/li><li>DeepStream SDK<\/li><li>scikit-learn<\/li><li>Openpose<\/li><li>OpenCV<\/li><\/ul>\n\n\n\n<p>Here is a <a href=\"https:\/\/docs.donkeycar.com\/guide\/robot_sbc\/tensorrt_jetson_nano\/\">guide<\/a> on how to use TensorRT on the NVIDIA Jetson Nano. Take a look and try it out with any of the projects listed above. You can also check out the Jetson products below to start your journey.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">NVIDIA Jetson products to try now with faster TensorRT inference<\/h2>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/www.seeedstudio.com\/NVIDIA-Jetson-AGX-Orin-Developer-Kit-p-5314.html\">Jetson AGX Orin Dev Kit<\/a><\/li><li><a href=\"https:\/\/www.seeedstudio.com\/Jetson-Xavier-AGX-H01-Kit-p-5283.html\">Jetson AGX Xavier H01 Kit<\/a><\/li><li><a href=\"https:\/\/www.seeedstudio.com\/ReTerminal-with-CM4-p-4904.html?queryID=d6f3ce9bdb090e299274bb938e026c2a&amp;objectID=4904&amp;indexName=bazaar_retailer_products\">reTerminal<\/a>, powered by Raspberry Pi CM4<\/li><li><a href=\"https:\/\/www.seeedstudio.com\/Jetson-10-1-A0-p-5336.html\">reComputer J1010 (Jetson Nano)<\/a><\/li><li><a href=\"https:\/\/www.seeedstudio.com\/Jetson-10-1-H0-p-5335.html\">reComputer J1020 (Jetson Nano)<\/a><\/li><li><a 
href=\"https:\/\/www.seeedstudio.com\/Jetson-20-1-H2-p-5329.html\">reComputer J2012 (Jetson Xavier NX)<\/a><\/li><li><a href=\"https:\/\/www.seeedstudio.com\/reServer-Jetson-20-1-H2-p-5337.html\">reComputer J2032 (Jetson Xavier NX)<\/a><\/li><\/ul>\n","protected":false},"author":200,"author_info":{"display_name":"Elaine Wu","author_link":"https:\/\/www.seeedstudio.com\/blog\/author\/elaine\/"}}