{"id":70538,"date":"2022-08-19T20:42:26","date_gmt":"2022-08-19T12:42:26","guid":{"rendered":"https:\/\/www.seeedstudio.com\/blog\/?p=70538"},"modified":"2022-08-22T14:55:34","modified_gmt":"2022-08-22T06:55:34","slug":"computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices","status":"publish","type":"post","link":"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/","title":{"rendered":"Computer Vision 101: what is computer vision, and how to implement CV with edge devices?"},"content":{"rendered":"\n<p>Computer vision is what enables computers and systems to derive meaningful information from digital images, videos, and other visual inputs. You can think of AI as the brain of computers and computer vision being the eyes of computers. It would take action or make recommendations based on the information received, and it has been rapidly developing and even surpassing humans in solving visual tasks. 
It is now vital in many industries, from medical diagnosis to autonomous driving and video monitoring.&nbsp;<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>But what exactly is computer vision?&nbsp;<\/strong><\/p>\n\n\n\n<p>Let&#8217;s walk you through the following concepts in this article:<\/p>\n\n\n\n<ol class=\"wp-block-list\"><li>Definition of computer vision<\/li><li>How computer vision works<\/li><li>Computer vision tasks and the \u201chello world\u201d of CV<\/li><li>Popular computer vision frameworks, libraries, and dev platforms<\/li><li>Computer vision applications and tools<\/li><li>Implementing CV at the edge<\/li><\/ol>\n\n\n\n<p>Hopefully, by the end of this article, you will understand what computer vision is and even try your hand at some projects yourself!<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">How does Computer Vision Work?<\/h2>\n\n\n\n<p>Computer vision requires a lot of data to function effectively. It analyzes the data repeatedly until it can discern distinctions and recognize images. Algorithmic models allow the machine to learn by itself rather than having someone manually program it to recognize an image.&nbsp;<\/p>\n\n\n\n<p>The picture below is a simple illustration of the greyscale image buffer of Abraham Lincoln. Each pixel is represented by a single 8-bit number, <strong>ranging from 0 (black) to 255 (white). 
On the right<\/strong> is what the <strong>software reads when you input an image.<\/strong><\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/lh4.googleusercontent.com\/tQlJsRknmGAT8e_s3RDUjr-dcA3rGnaLn1w05gWy714s7UU4PLX-c0ZaEwoR2c_eskczD_ky_EdBonfndEYIgmfzF8Gz1Gc78LvShZ6fbw_UbiD8zzYQee4BOrDCuy2iFe8lgKMbK7q_Enuf5Vg-O3Q\" alt=\"\"\/><figcaption>Source: <a href=\"https:\/\/www.datarobot.com\/blog\/introduction-to-computer-vision-what-it-is-and-how-it-works\/\">DataRobot<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">MNIST handwritten digit classification: \u201cHello World\u201d of Computer Vision<\/h2>\n\n\n\n<p>The MNIST data set is a reduced, remixed version of the data sets from NIST, the US National Institute of Standards and Technology. The training set consists of handwritten digits from 250 different people, half of whom were high school students and half Census Bureau employees; the test set has the same proportions of handwritten digit data.<\/p>\n\n\n\n<p>The <a href=\"http:\/\/yann.lecun.com\/exdb\/mnist\/\">MNIST handwritten digits dataset <\/a>has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of the larger set provided by NIST. The digits have been size-normalized and centered in a fixed-size image. The original creators of MNIST kept a list of some of the methods tested on it. 
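Before any of those methods run, the raw pixels are usually prepared for training. Below is a minimal NumPy sketch of typical preprocessing for MNIST-style data (28x28 greyscale images, labels 0-9); the random pixels here just stand in for real digits:

```python
import numpy as np

# A fake batch of MNIST-style data: 28x28 greyscale digits with labels 0-9.
# (Random pixels stand in for real handwritten digits in this sketch.)
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(4, 28, 28), dtype=np.uint8)
labels = np.array([3, 1, 4, 1])

x = images.astype(np.float32) / 255.0     # scale pixel values to [0, 1]
x = x.reshape(len(x), -1)                 # flatten each image to a 784-vector
y = np.eye(10, dtype=np.float32)[labels]  # one-hot encode the labels

print(x.shape, y.shape)   # (4, 784) (4, 10)
```

With a real loader (e.g. torchvision's MNIST dataset), the same scaling and encoding steps apply to the downloaded images.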
In their original paper, they used support-vector machines (SVMs) to get an error rate of 0.8%.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/Z6jV30FGI__pYPgzrF53uzS4vPqYHbXwHnv3ajR3BmglcTIv8glbOs4E1QxjmuG5urwQADYllWiBu8stbxJyPVAlBEIeOPhoZQvAzuCVgl6rSP-_kzOTyI7YWLzYRMJ9b5w2MsB_NPNjZWKyJn-TzRo\" alt=\"\"\/><figcaption><a href=\"https:\/\/en.wikipedia.org\/wiki\/File:MnistExamples.png\">Sample images from MNIST test dataset<\/a>, Wikipedia<\/figcaption><\/figure><\/div>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Computer Vision Tasks<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Image Classification<\/h3>\n\n\n\n<p>Image classification is a basic computer vision task: given an image, the program predicts which class it belongs to (a cat, an orange, a human face) and labels it accordingly.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Image Localization<\/h3>\n\n\n\n<p>Image localization is a basic computer vision task that finds the object and draws a bounding box around it. The following image shows the difference between classification and localization.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/lh4.googleusercontent.com\/n7ERMNpFL7lwQ6b07VnP_QBdKkx_Cpy_P0dQWVwaFv7qW7z_qyoXf0xomHnGnPAvNmW8VJwCNyDOFes_VaDV4QI9bfIDuvh4-0ELLox4QA7IM3VzFpKVNkZCONP2pgaF16bILlf5uJUxXYeVRglw6WM\" alt=\"\"\/><figcaption>Source: <a href=\"https:\/\/miro.medium.com\/max\/608\/1*uI4AaqoDew9p9YRsVFDZNg.png\">Medium<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Object Detection<\/h3>\n\n\n\n<p>Object detection detects and classifies all objects in an image, assigning a class to each individual object and drawing a bounding box around it. 
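Predicted boxes are usually scored against hand-labelled ones with intersection-over-union (IoU), the standard overlap measure in detection. A minimal sketch for axis-aligned (x1, y1, x2, y2) boxes, with made-up coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted box vs. a hand-labelled one (pixel coordinates):
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.143 -> weak overlap
```

Detectors typically treat a prediction as correct only when its IoU with the ground-truth box exceeds a threshold such as 0.5.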
Object detection differs from classification and localization, as seen in the image below.&nbsp;<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/FUIf_i4N7tX-m2q2KqJix8Xj-4P_yE_dKAug2oyVYRmyj7h1zZhCr51naIgiGAQZIhnsXcbSiEFfBuHbF7CcdJLQTlgqj_rrlNMq5v7SokooHPHVhQebT-kRZE3ck0azw_UnDQTxtDzxsmDXxlbpeWI\" alt=\"\"\/><figcaption>Source: <a href=\"https:\/\/nirmalamurali.medium.com\/image-classification-vs-semantic-segmentation-vs-instance-segmentation-625c33a08d50\">Nirmala Murali<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Semantic Segmentation<\/h3>\n\n\n\n<p>Semantic segmentation takes object detection a step further: instead of drawing a bounding box around each object, it identifies the specific pixels in the image that belong to it and segments them. Different classes are assigned different colors, for example, grass = green and sheep = brown. It is commonly used in autonomous driving.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Instance Segmentation&nbsp;<\/h3>\n\n\n\n<p>Instance segmentation builds on semantic segmentation. Instead of assigning the same pixel values to all the objects in a class, it segments and shows each instance of that class separately. If more than one of the same object is detected in the image, each is labeled accordingly, as seen below: Sheep 1, Sheep 2, and Sheep 3. 
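Since every instance gets its own id in the mask, counting objects of a class reduces to counting distinct ids. A toy NumPy sketch (the mask values are invented):

```python
import numpy as np

# Toy instance mask: 0 = background, 1..3 = three different sheep.
mask = np.array([
    [0, 1, 1, 0, 2],
    [0, 1, 0, 2, 2],
    [3, 0, 0, 0, 2],
])

instance_ids = np.unique(mask)
instance_ids = instance_ids[instance_ids != 0]   # drop the background id
print(len(instance_ids))   # 3 sheep
```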
Instance segmentation is commonly used for crowd counting.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/BP6Q9IwpKYPuCfh-rFVaPeC0Kjlu4tbglBRx9s5FRYXFLb5wP7czrftE956DjmMy5CC_50J4Rgzk_AVlb30tY0L7_5ENb1lNBrcq5NpjZj7uwT8STLbn2RavHaKeUfZ_dkZeXx9Ny4lv3BZOQnigjm8\" alt=\"\"\/><figcaption>Source: <a href=\"https:\/\/nirmalamurali.medium.com\/image-classification-vs-semantic-segmentation-vs-instance-segmentation-625c33a08d50\">Nirmala Murali<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Pose Estimation&nbsp;<\/h3>\n\n\n\n<p>Pose estimation estimates the position and orientation of the joints of a human body. It predicts and tracks a subject\u2019s movement by finding the locations of key points. Based on this information, it can compare various movements and postures and draw insights. 
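For example, once key points are located, simple geometry turns them into such insights. A small sketch that computes the angle at a joint from three 2D key points (the pixel coordinates are invented; a real pipeline would take them from a pose-estimation model):

```python
import math

def joint_angle(a, b, c):
    """Angle at key point b (in degrees), formed by the segments b-a and b-c."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

# Hypothetical shoulder, elbow, and wrist positions from a pose estimator:
shoulder, elbow, wrist = (120, 80), (150, 140), (200, 150)
print(round(joint_angle(shoulder, elbow, wrist), 1))   # elbow angle in degrees
```

Tracking such angles over consecutive frames is how, say, a fitness app counts repetitions or checks form.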
Pose estimation is commonly used in AR\/VR gaming and sports.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img fetchpriority=\"high\" decoding=\"async\" width=\"644\" height=\"449\" src=\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-18.png\" alt=\"\" class=\"wp-image-70540\" srcset=\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-18.png 644w, https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-18-300x209.png 300w\" sizes=\"(max-width: 644px) 100vw, 644px\" \/><figcaption>Source: <a href=\"https:\/\/alwaysai.co\/blog\/using-pose-estimation-on-the-jetson-nano-with-alwaysai\">Using Pose Estimation on the Jetson Nano With alwaysAI<\/a><br><\/figcaption><\/figure><\/div>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Popular Computer Vision Libraries and Frameworks<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/pytorch.org\/\">PyTorch<\/a><\/h3>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/MKVRTly99jluHz9oGX7oMbiYbrGfLp1VMgGyZ8XmSU0X4F4kbrfDDyrNR1P_4SLec5UBQQkCnV7BM_ylLM2679lCvXSYT3U0WUKmjPw02u39gYhSOSljnaGZfzBNhiHD-u1-nINWfHEfSnQraAwYalU\" alt=\"\" width=\"300\" height=\"150\"\/><\/figure><\/div>\n\n\n\n<p>PyTorch is an open-source ML library that uses dynamic computation graphs, which allow for greater flexibility in building complex architectures. As the name suggests, it has first-class Python support. 
Although its low-level optimizations are implemented in C and C++, almost all of the framework is written in Python, which keeps its source code concise and readable.&nbsp;It supports both CPU and GPU computation.<\/p>\n\n\n\n<p>The <a href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nvidia\/containers\/pytorch\">PyTorch NGC Container<\/a> is optimized for GPU acceleration and contains a validated set of libraries that enable and optimize GPU performance. This container also contains software for accelerating ETL (<a href=\"https:\/\/developer.nvidia.com\/dali\">DALI<\/a>, <a href=\"https:\/\/rapids.ai\/\">RAPIDS<\/a>), Training (<a href=\"https:\/\/developer.nvidia.com\/cudnn\">cuDNN<\/a>, <a href=\"https:\/\/developer.nvidia.com\/nccl\">NCCL<\/a>), and Inference (<a href=\"https:\/\/nvidia.github.io\/Torch-TensorRT\/\">TensorRT<\/a>) workloads.<\/p>\n\n\n\n<p>NVIDIA Jetson is one of the best platforms for working with PyTorch models, mainly due to its inference support, which lets it run most common computer vision models that can be transfer-learned with PyTorch. Coupled with TensorRT and the PyTorch API, you can seamlessly run PyTorch models on NVIDIA Jetson and also on Raspberry Pi. Learn how to do so in this <a href=\"https:\/\/pytorch.org\/blog\/running-pytorch-models-on-jetson-nano\/\">blog post<\/a> by PyTorch.&nbsp;<\/p>\n\n\n\n<p>If you are just getting started, don&#8217;t miss the <a href=\"https:\/\/pytorch.org\/tutorials\/beginner\/transfer_learning_tutorial.html\">Torchvision object detection fine-tuning tutorial<\/a>. 
The <a href=\"https:\/\/pytorch.org\/vision\/stable\/index.html#module-torchvision\">torchvision<\/a> package consists of popular datasets, model architectures, and common image transformations for computer vision.<\/p>\n\n\n\n<p><a href=\"https:\/\/pytorch.org\/vision\/stable\/index.html#module-torchvision\">torchvision<\/a> &nbsp;includes the following packages:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>vision.datasets: Several commonly used vision datasets, which can be downloaded and loaded, the main advanced usage here is to see how the source code writes your own subclass of Dataset<\/li><li>vision.models: AlexNet, VGG, ResNet, and Densenet with trained parameters.<\/li><\/ul>\n\n\n\n<p>You can also find this <a href=\"https:\/\/pytorch.org\/tutorials\/beginner\/transfer_learning_tutorial.html\">tutorial<\/a> about how to train a convolutional neural network for image classification using transfer learning. Read more about transfer learning at <a href=\"https:\/\/cs231n.github.io\/transfer-learning\/\">cs231n notes<\/a>.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/github.com\/ultralytics\/yolov5\">YOLOv5<\/a><\/h3>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img decoding=\"async\" src=\"https:\/\/lh4.googleusercontent.com\/Ahjnx_oEelapHKDMbNQ3-CWQ_DhAzLerHLsUFHSit9LGJXBNB6PS-Xx4BKy_iYtvzjpHvkj7eneK-jG2uZpEUQQIj4TA07OaJxK3eS3cu-IM5tPFH-rzCDwvUXJDPtZbYN6vttQ5RlWFlyD0rxRfdNA\" alt=\"\" width=\"452\" height=\"297\"\/><figcaption>YOLOv5 by Ultralytics<\/figcaption><\/figure><\/div>\n\n\n\n<p>YOLOv5 is a family of object detection architectures and models pre-trained on the COCO dataset and represent <a href=\"https:\/\/ultralytics.com\/\">Ultralytics<\/a> open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.YOLOv5 
is implemented in PyTorch and benefits from the mature PyTorch ecosystem: simpler implementation and easier deployment. YOLOv5 also makes deployment to mobile devices simpler, as the model can be easily exported to ONNX and CoreML. Check our wiki on how to build a <a href=\"https:\/\/wiki.seeedstudio.com\/YOLOv5-Object-Detection-Jetson\/\">custom model with a small dataset using YOLOv5 and deploy it to NVIDIA Jetson Nano and Xavier NX.&nbsp;<\/a><\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/user-images.githubusercontent.com\/26833433\/155040763-93c22a27-347c-4e3c-847a-8094621d3f4e.png\" alt=\"\" width=\"600\" height=\"300\"\/><figcaption>Why YOLOv5<\/figcaption><\/figure><\/div>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/github.com\/open-mmlab\/mmdetection\">MMDetection<\/a><\/h3>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/k-2b0Z3M5pmIAhqB19t5XGgZo1kxRE7r7i_FW9vc6erMnLktss9jl30aVQIQHgMVc2xeD79QMiInv28i-lmIfjcOLQyiQL9I4r7gCE9NN8aRzpHC_RUOnw9rf7XKAdVFnxfPFYqwH7u5nFXcEWTwsAM\" alt=\"\" width=\"473\" height=\"145\"\/><\/figure><\/div>\n\n\n\n<p>MMDetection is an open-source object detection toolbox built on the previously mentioned PyTorch. It consists of training recipes, pre-trained models, and dataset support. It runs on Linux, Windows, and macOS and requires Python 3.6+, CUDA 9.2+, and PyTorch 1.5+. They have also released the library <a href=\"https:\/\/github.com\/open-mmlab\/mmcv\">mmcv<\/a> for computer vision research. Thanks to its modular design, you can implement a new algorithm with a small amount of code. 
This greatly improves code reuse.<\/p>\n\n\n\n<p class=\"has-text-align-center\"><img loading=\"lazy\" decoding=\"async\" width=\"602\" height=\"128\" src=\"https:\/\/lh5.googleusercontent.com\/QUy2WnsMt8ly3n-8Zm8HpsQHxU4rONUpyt-9EKsxiHh8l6DtAXL427ooE3RZdlwz64dtaz_ego9hV49luBLqA-pz-KFfLZ1jTMG8pv7vz1nTgL-USYexg0p1_Y28AtZrkkuK1odZVByZhKHihd2GDPA\"><\/p>\n\n\n\n<p>MMDeploy is an open-source deep learning model deployment toolset. It is a part of the <a href=\"https:\/\/openmmlab.com\/\">OpenMMLab<\/a> project. Check this <a href=\"https:\/\/github.com\/open-mmlab\/mmdeploy\/blob\/master\/docs\/en\/01-how-to-build\/jetsons.md\">guide<\/a> to learn how to install MMDeploy on NVIDIA Jetson edge platforms such as Seeed\u2019s reComputer.&nbsp; <\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/opencv.org\/\">OpenCV<\/a><\/h3>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/jPGNi2oK5BaMz_UmJdIyrVKYYis4A0kULjfJxo8Xk9s7QlPH2Jkg60A_83DdhZiDv1-bkB5RVZJaLIRvfPy-xT2GqHbsaE3DB5dIA9OYFP4Mdd2x1VyDARisWmOFU8H7yByWFdMYQH-IOFoQaoFh3jQ\" alt=\"\" width=\"214\" height=\"189\"\/><\/figure><\/div>\n\n\n\n<p>OpenCV is one of the most popular open-source computer vision and ML software libraries. It was built to provide a common infrastructure for computer vision applications. 
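In Python, OpenCV images are plain NumPy arrays, which is why it interoperates so easily with the rest of the ecosystem. The sketch below reproduces, up to rounding, what cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) computes, written in plain NumPy so it runs even without OpenCV installed (the pixel values are made up):

```python
import numpy as np

# A 2x2 colour image as OpenCV would store it: uint8, channels last, BGR order.
bgr = np.array([
    [[255, 0, 0], [0, 255, 0]],       # pure blue, pure green
    [[0, 0, 255], [255, 255, 255]],   # pure red, white
], dtype=np.uint8)

# ITU-R BT.601 luma weights, the same coefficients OpenCV uses for BGR2GRAY.
weights = np.array([0.114, 0.587, 0.299])   # B, G, R
grey = (bgr.astype(np.float64) @ weights).round().astype(np.uint8)

print(grey)   # one 0-255 value per pixel, e.g. white -> 255
```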
It runs on Windows, Linux, Android, and macOS and can be used from Python, Java, C++, and MATLAB.&nbsp;<\/p>\n\n\n\n<p>A few use cases of OpenCV include:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>2D and 3D Feature Toolkits<\/li><li>Facial Recognition Applications<\/li><li>Gesture Recognition<\/li><li>Motion Understanding<\/li><li>Human-Computer Interaction<\/li><li>Object Detection<\/li><li>Segmentation and Recognition<\/li><\/ul>\n\n\n\n<p>Learn how to use OpenCV from this crash course on <a href=\"https:\/\/www.youtube.com\/watch?v=Z846tkgl9-U\">YouTube<\/a>! You will learn topics such as Object Detection and Tracking, Edge and Face Detection, Image Enhancement, and many more.<\/p>\n\n\n\n<p>OpenCV includes the GPU module, which contains the library\u2019s GPU-accelerated functionality. With support from NVIDIA, work on the module began in 2010, ahead of its initial release in the Spring of 2011. It includes accelerated code for significant parts of the library, is still growing, and is adapting to new computing technologies and GPU architectures. Our partner <a href=\"http:\/\/alwaysai.co\">alwaysAI<\/a> built OpenCV into the core of its edge runtime environment. That means in every alwaysAI application, you can add import cv2 and use OpenCV in your app. alwaysAI also built a suite of tools around OpenCV to make the end-to-end process seamless and to help solve some of the common pain points unique to working with edge devices. 
<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/gE9ySwUrLIzIewIujZqxDgCdVd0NaWJlZ2N1yy9VEdjCqTxBq06DauEh55xb07dfh2I1_qp-s7QED1cmS5XFpcPMhHt9xkaHDPase_a9KHW6XIuxsbKWZmKph5V8R3njeJYjPAne12A4n3ciQHHDE8g\" alt=\"\" width=\"496\" height=\"329\"\/><figcaption>alwaysAI Video Streamer<\/figcaption><\/figure><\/div>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/keras.io\/\">Keras<\/a><\/h3>\n\n\n\n<p>Keras is an open-source artificial neural network library written in Python. It serves as a high-level application programming interface for TensorFlow, Microsoft CNTK, and Theano, covering deep learning model design, debugging, evaluation, application, and visualization.<\/p>\n\n\n\n<p>Find all code examples at <a href=\"https:\/\/keras.io\/examples\/vision\/\">Keras<\/a>.<\/p>\n\n\n\n<p><a href=\"https:\/\/keras.io\/guides\/keras_cv\/\">KerasCV<\/a> guides include:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/keras.io\/guides\/keras_cv\/cut_mix_mix_up_and_rand_augment\">CutMix, MixUp, and RandAugment image augmentation with KerasCV<\/a><\/li><li><a href=\"https:\/\/keras.io\/guides\/keras_cv\/custom_image_augmentations\">Custom Image Augmentations with BaseImageAugmentationLayer<\/a><\/li><li><a href=\"https:\/\/keras.io\/guides\/keras_cv\/coco_metrics\">Using KerasCV COCO Metrics<\/a><\/li><\/ul>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/www.mathworks.com\/products\/matlab.html\">MATLAB<\/a><\/h3>\n\n\n\n<p>Embedded vision involves applying image processing to embedded systems, especially edge devices. The main aspects of the embedded vision development workflow include algorithm design, system modeling, collaboration, and deployment of vision algorithms. 
Engineers can use MATLAB\u00ae and Simulink\u00ae to develop and deploy image processing and computer vision systems to the embedded target hardware.&nbsp;<\/p>\n\n\n\n<p>With MATLAB and Simulink, you can develop algorithms and model systems, integrate third-party software frameworks, and generate code for the target hardware platform. Check this <a href=\"https:\/\/ww2.mathworks.cn\/help\/supportpkg\/nvidia\/ref\/jetson.html\">guide<\/a> on how to connect MATLAB with NVIDIA Jetson.&nbsp;<\/p>\n\n\n\n<p>Explore the <a href=\"https:\/\/ww2.mathworks.cn\/videos\/image-processing-made-easy-120742.html\">fundamentals of image processing using MATLAB<\/a>.&nbsp;<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/developer.nvidia.com\/embedded\/vpi\">NVIDIA VPI<\/a><\/h3>\n\n\n\n<p>NVIDIA\u00ae Vision Programming Interface (VPI) is a software library that implements computer vision and image processing algorithms on several computing hardware platforms available in NVIDIA embedded and discrete devices. 
VPI provides a unified API to both CPU and NVIDIA CUDA algorithm implementations, as well as interoperability between VPI, OpenCV, and CUDA.&nbsp;<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/_5dewvZvLYSB4jLY95HS5LAkHOXU5Z6q2zF5i1SkhCGUVUTN90tatgXAWXWuKk_yygU830k-touEZacd2CIQgokcGtYCjv7FqzHJVYiLHwyLvuBcwtbp99QG3gphM9HlqPIOLJuYeinB53qEAWBrdzVv0ZsNE18QxvNquu3l3GFbJ4MsZkEncnbc9g\" alt=\"\" width=\"462\" height=\"261\"\/><figcaption>Kanade-Lucas-Tomasi (KLT) Feature Tracker, <a href=\"https:\/\/developer.nvidia.com\/embedded\/vpi\">NVIDIA<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\"><a href=\"https:\/\/www.tensorflow.org\/\">TensorFlow<\/a><\/h3>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/FsaUFTA8OFPtjS9qJNo0V48aHpKY5LxdQ5n4OrFo1fDpSEgXhZW9WswlQmbnjhcrrzlAQI8n9U4CnBdqj5IyNaS4PBJsDVNcjFiRSHGz38z_efL2iN0POx02UdZJJ7CzUH2ogWYIF3fsUXT5DPPLdnw\" alt=\"\" width=\"300\" height=\"192\"\/><\/figure><\/div>\n\n\n\n<p>TensorFlow is an end-to-end open-source ML platform capable of performing a wide range of tasks, including computer vision. TensorFlow Lite allows you to run models on mobile and edge devices, while TensorFlow.js is for the web. 
It runs on Windows, macOS, Linux, and WSL2, supporting Python, C, C++, Java, and more.&nbsp;<\/p>\n\n\n\n<p>The main uses are:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Image Classification<\/li><li>Object Detection and Segmentation<\/li><li>Image Stylization<\/li><li>Generative Adversarial Networks<\/li><\/ul>\n\n\n\n<p>Get started easily:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/www.youtube.com\/watch?v=QRs619bWAow\">TensorFlow Models Accelerated for NVIDIA Jetson<\/a><\/li><li><a href=\"https:\/\/blog.tensorflow.org\/2021\/07\/real-world-ml-with-coral-manufacturing.html\">Real-World ML with Coral: Manufacturing<\/a>&nbsp;<\/li><\/ul>\n\n\n\n<p>Check the <a href=\"https:\/\/www.seeedstudio.com\/Coral-Dev-Board-p-2900.html\">Coral dev board <\/a>at Seeed and enjoy an exclusive educational discount!&nbsp;<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lh4.googleusercontent.com\/8j3KjhZWbXH9l7d6AMXja5Npwa3eXrQqDyxrYtFF17ZG4TxjdoeTwRTq6gh509V3AnanOoNQsUAcU8wEzEmrqe6KfiNq8FFnoa0WI703N22mn596LWi_X6UGc5Kl3uydIdNb2h71i4y8rVlcvc1zDps\" alt=\"\" width=\"350\" height=\"263\"\/><figcaption><a href=\"https:\/\/www.seeedstudio.com\/Coral-Dev-Board-p-2900.html\">Coral dev board<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<p>For on-device machine learning, check <a href=\"https:\/\/www.tensorflow.org\/lite\">TensorFlow Lite<\/a>.<\/p>\n\n\n\n<p>TensorFlow Lite is optimized for on-device machine learning, with a focus on latency, privacy, connectivity, size, and power consumption.<\/p>\n\n\n\n<p>Get started quickly with <a href=\"https:\/\/www.seeedstudio.com\/ReTerminal-with-CM4-p-4904.html\">reTerminal<\/a> (powered by Raspberry Pi CM4)&nbsp;by Seeed.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" 
src=\"https:\/\/lh6.googleusercontent.com\/H-XKR_3gXrKzrBAmM50KkrICifKbEQg9WeHSfx07HlRI3cgOBzwzDPbFIFbKgjQOsn7TlCzOwvZpLVQRKWF0KZtVdPLjs8SnnBIDlE1b4NKuA51R-FnF51Kcs7OJnFp1TcY9K8d-UeM5RBScXV3qnpE\" alt=\"\" width=\"700\" height=\"525\"\/><\/figure><\/div>\n\n\n\n<p>To learn more about reTerminal, you can check out Seeed\u2019s <a href=\"https:\/\/wiki.seeedstudio.com\/reTerminal_ML_TFLite\/\">Wiki page<\/a>.&nbsp;&nbsp;&nbsp;<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Computer Vision Use Case<\/h2>\n\n\n\n<h2 class=\"wp-block-heading\">Retail &#8211; <a href=\"https:\/\/www.zenus.ai\/\">Zenus&nbsp;<\/a><\/h2>\n\n\n\n<p>Zenus offers fully integrated solutions for retail brands using computer vision. The Zenus Smart Camera captures data such as real-time foot traffic, demographics, and sentiment analysis.&nbsp;<\/p>\n\n\n\n<p>Foot traffic information will be used to compute conversion rates and predict sales. 
Heat maps like this can help a store decide where to place high-profit-margin products.&nbsp;<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/ALyVpt74pcEqhcJocfCQQJYrSmr7MNkD6xJjUEBf8de8Pb4QyxPsdNZ-v61e8jLIvSBSBQiRmdanZefo_g9aCrVY_VywE5L-2BFWk1Hp1dmm3Zfk59hkxZTGDpLyfCrFAKup9WQl0Ya-lr9h5S_vGNo\" alt=\"\" width=\"347\" height=\"245\"\/><\/figure><\/div>\n\n\n\n<p>Demographics and sentiment analysis can help the store understand whether it is reaching the right target audience for each product and whether its marketing techniques are working.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lh4.googleusercontent.com\/-4JdV3AIijUD9x9h_EBC0IBQRKbfggbXAvEQTgQYZMDd_P2SnXZbxlYhYM4s5b82iaoOCPkZovIVJHdZBvqHPAiruEdU8br9li89o1yIIl-Kpa8kB1LjIo1RNQss52YxfheD9-itGqi9wDqf5PAVc1c\" alt=\"\" width=\"315\" height=\"286\"\/><\/figure><\/div>\n\n\n\n<p>With all of this information, the retail store can further improve sales and increase customer engagement. 
Read more about how Zenus helps the retail industry with computer vision in Seeed\u2019s <a href=\"https:\/\/www.seeedstudio.com\/blog\/2021\/12\/03\/sentiment-analysis-in-the-retail-industry-becomes-more-accessible\/\">blog post<\/a>.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Agriculture &#8211; <a href=\"http:\/\/intflow.ai\/\">Intflow<\/a><\/h3>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/tvpf4j3Hu8MHElsrAC5otaBLDTLf4dyPbiseL9Ymdzks7sT6xEWMPYkyO2Dd7CEzXtXNFpcklErEZke30MWn1op6bXo1t2Oa027UVjMBOMy60B1nS5RVb58LHDuq8fhQhN9WuJ-qMiDK-UDJr9Ozqf0\" alt=\"\" width=\"800\" height=\"216\"\/><\/figure><\/div>\n\n\n\n<p>Intflow\u2019s EdgeFarm is an AI solution that detects livestock injuries and diseases to help farmers manage and optimize livestock productivity. EdgeFarm collects biometric data from the livestock and uses computer vision to capture real-time data such as daily eating habits and weight gain. 
Based on this data, an action list is provided to improve the productivity and efficiency of rearing livestock.<\/p>\n\n\n\n<p>Read more on how Intflow\u2019s EdgeFarm works in Seeed\u2019s <a href=\"https:\/\/www.seeedstudio.com\/blog\/2022\/07\/01\/edge-ai-at-the-farm-precise-livestock-management-helps-farmers-optimize-livestock-productivity\/\">blog post<\/a>.&nbsp;<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Smart Home &#8211; <a href=\"https:\/\/github.com\/blakeblackshear\/frigate\">Frigate<\/a><\/h3>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/CwnqIcVhX9heFrCVykHH3wpsAHQRG6wA_ZCzXRxLbuSgfQx_RyECvwq2k9siGhSwPpakmVlHKFzw3nYjqI5jOivFqUxzOGwdMqsaHu4LyIC3jB-nH6RcH9cUxLk73wqvDkZhyuTP8Aiqk_HLeYw3ax8\" alt=\"\" width=\"400\" height=\"800\"\/><\/figure><\/div>\n\n\n\n<p>Frigate is an open-source network video recorder that can be integrated into your Home Assistant system. It uses OpenCV and TensorFlow to perform real-time object detection, so you can view live footage of what is outside your door.&nbsp;<\/p>\n\n\n\n<p>The best thing is that you can set it up yourself for free, with no cloud management fees or recurring security-system fees. 
Learn how to do it in Seeed\u2019s <a href=\"https:\/\/www.seeedstudio.com\/blog\/2022\/07\/14\/build-an-nvr-camera-system-with-frigate-monitor-your-security-cameras-with-locally-processed-ai\/\">blog post<\/a>.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Hardware for Computer Vision<\/h2>\n\n\n\n<h4 class=\"wp-block-heading\"><a href=\"https:\/\/www.seeedstudio.com\/Jetson-10-1-A0-p-5336.html\">reComputer Nano\/NX: real-world AI at the edge, starting from $199<\/a><\/h4>\n\n\n\n<h4 class=\"wp-block-heading\">Built with <strong><a href=\"https:\/\/www.seeedstudio.com\/Jetson-10-1-H0-p-5335.html\">Jetson Nano<\/a><\/strong> 4GB\/ <strong><a href=\"https:\/\/www.seeedstudio.com\/reServer-Jetson-50-1-H4-p-5338.html\">Xavier NX 8GB\/16GB<\/a><\/strong><\/h4>\n\n\n\n<p>The reComputer series for Jetson comprises compact edge computers built with NVIDIA\u2019s advanced AI embedded systems: J10 (<a href=\"https:\/\/www.seeedstudio.com\/NVIDIA-Jetson-Nano-Module-p-4417.html?queryID=2003aa316accbe28cb3f7cf2ffb83943&amp;objectID=4417&amp;indexName=bazaar_retailer_products\">Nano 4GB<\/a>) and J20 (Jetson <a href=\"https:\/\/www.seeedstudio.com\/NVIDIA-Jetson-Xavier-NX-Module-p-4421.html?queryID=fa33abfaf6f67f95a4c01b60263d2793&amp;objectID=4421&amp;indexName=bazaar_retailer_products\">Xavier NX<\/a> 8GB and Jetson Xavier NX 16GB). 
With rich extension modules, industrial peripherals, and thermal management, reComputer for Jetson is ready to help you accelerate and scale the next-gen AI product by deploying popular DNN models and ML frameworks to the edge and inferencing with high performance.<\/p>\n\n\n\n<p class=\"has-text-align-center\"><img decoding=\"async\" width=\"305px;\" height=\"190px;\" src=\"https:\/\/lh4.googleusercontent.com\/v9rwNil04lG8puNPx7YxhOcSBllg4Y2tNgF2b5Hn7M_zUs9MSZNEdEi2EhsXqLcWKX14kvHUL-NK0rutzb4U865zulq-S3c6wXpRvL3CWiz5xH3cBcCZf5AL_RIhjay7L38w2H7A-34y\"><\/p>\n\n\n\n<div class=\"wp-block-blockspare-blockspare-buttons blockspare-52a026be-cd44-4 blockspare-block-button-wrap\"><style>.blockspare-52a026be-cd44-4 .blockspare-block-button{text-align:center;margin-top:30px;margin-bottom:30px}.blockspare-52a026be-cd44-4 .blockspare-block-button span{color:#ffffff;border-width:1px;font-size:16px;font-family:Default}.blockspare-52a026be-cd44-4.wp-block-blockspare-blockspare-buttons .blockspare-block-button .blockspare-button{background-color:#097c4e}.blockspare-52a026be-cd44-4.wp-block-blockspare-blockspare-buttons .blockspare-block-button .blockspare-button:visited{background-color:#097c4e}.blockspare-52a026be-cd44-4.wp-block-blockspare-blockspare-buttons .blockspare-block-button .blockspare-button:focus{background-color:#097c4e}@media screen and (max-width:1025px){.blockspare-52a026be-cd44-4 .blockspare-block-button span{font-size:undefinedpx}}@media screen and (max-width:768px){.blockspare-52a026be-cd44-4 .blockspare-block-button span{font-size:undefinedpx}}<\/style><div class=\"blockspare-block-button\"><a href=\"https:\/\/www.seeedstudio.com\/Jetson-10-1-A0-p-5336.html\" class=\"blockspare-button blockspare-button-shape-rounded blockspare-button-size-small\"><span>Buy Now<\/span><\/a><\/div><\/div>\n\n\n\n<ul class=\"wp-block-list\"><li>Edge AI box fits into anywhere<\/li><li>Embedded Jetson Nano\/NX Module<\/li><li>Pre-installed Jetpack for easy 
deployment<\/li><li>Nearly the same form factor as Jetson Developer Kits, with a rich set of I\/Os<\/li><li>Stackable and expandable<\/li><\/ul>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><a href=\"https:\/\/developer.nvidia.com\/embedded\/jetson-benchmarks\">Jetson Benchmark<\/a>: Jetson Xavier NX and Jetson AGX Orin MLPerf v2.0 Results<\/h2>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\"><table><tbody><tr><td><strong>Model<\/strong><\/td><td><strong>Jetson Xavier NX<\/strong><\/td><td><strong>Jetson AGX Xavier<\/strong><\/td><td><strong>Jetson AGX Orin<\/strong><\/td><\/tr><tr><td><a href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nvidia\/models\/tlt_peoplenet\">PeopleNet<\/a><\/td><td>124<\/td><td>196<\/td><td>536<\/td><\/tr><tr><td><a href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nvidia\/teams\/tao\/models\/actionrecognitionnet\">Action Recognition 2D<\/a><\/td><td>245<\/td><td>471<\/td><td>1577<\/td><\/tr><tr><td><a href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nvidia\/teams\/tao\/models\/actionrecognitionnet\">Action Recognition 3D<\/a><\/td><td>21<\/td><td>32<\/td><td>105<\/td><\/tr><tr><td><a href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nvidia\/teams\/tao\/models\/lprnet\">LPR Net<\/a><\/td><td>706<\/td><td>1190<\/td><td>4118<\/td><\/tr><tr><td><a href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nvidia\/teams\/tao\/models\/dashcamnet\">Dashcam Net<\/a><\/td><td>425<\/td><td>671<\/td><td>1908<\/td><\/tr><tr><td><a href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nvidia\/teams\/tao\/models\/bodyposenet\">Bodypose Net<\/a><\/td><td>105<\/td><td>172<\/td><td>559<\/td><\/tr><tr><td><a href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nvidia\/teams\/nemo\/models\/stt_en_citrinet_1024\">ASR: Citrinet 1024<\/a><\/td><td>27<\/td><td>34<\/td><td>113<\/td><\/tr><tr><td><a href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nvidia\/models\/bertbaseuncasedfornemo\">NLP: 
BERT-base<\/a><\/td><td>58<\/td><td>94<\/td><td>287<\/td><\/tr><tr><td><a href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nvidia\/teams\/nemo\/models\/tts_en_e2e_fastpitchhifigan\">TTS: Fastpitch-HifiGAN<\/a><\/td><td>7<\/td><td>9<\/td><td>42<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Get started with Computer Vision with easy-to-use developer tools. <\/h3>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/github.com\/Seeed-Studio\/node-red-contrib-ml\">Edge AI No Code Vision Tool,<\/a> Seeed latest open-source project for deploying AI applications within 3 nodes.\u00a0<\/li><li><a href=\"https:\/\/developer.nvidia.com\/deepstream-sdk\">NVIDIA DeepStream SDK<\/a> delivers a complete streaming analytics toolkit for AI-based multi-sensor processing and video and image understanding on Jetson.<\/li><li><a href=\"https:\/\/developer.nvidia.com\/tao-toolkit\">NVIDIA TAO Tool Kit<\/a>, built on TensorFlow and PyTorch, is a low-code version of the NVIDIA TAO framework that accelerates the model training\u00a0<\/li><li><a href=\"https:\/\/alwaysai.co\/blog\/getting-started-with-the-jetson-nano-using-alwaysai\">alwaysAI<\/a>: build, train, and deploy computer vision applications directly at the edge of reComputer. Get free access to 100+ pre-trained Computer Vision Models and train custom AI models in the cloud in a few clicks via enterprise subscription. 
Check out our <a href=\"https:\/\/wiki.seeedstudio.com\/alwaysAI-Jetson-Getting-Started\/#object-detection-on-pre-loaded-video-file\">wiki<\/a> guide to get started with alwaysAI.<\/li><li><a href=\"https:\/\/www.edgeimpulse.com\/\">Edge Impulse<\/a>: the easiest embedded machine learning pipeline for deploying audio, classification, and object detection applications at the edge with zero dependencies on the cloud.\u00a0<\/li><li><a href=\"https:\/\/blog.roboflow.com\/deploy-to-nvidia-jetson\/\">Roboflow<\/a> provides tools to convert raw images into a custom-trained computer vision model of object detection and classification and deploy the model for use in applications. See the <a href=\"https:\/\/docs.roboflow.com\/inference\/nvidia-jetson\">full documentation<\/a> for deploying to NVIDIA Jetson with Roboflow.<\/li><li><a href=\"https:\/\/github.com\/ultralytics\/yolov5\">YOLOv5 by Ultralytics<\/a>: use transfer learning to realize few-shot object detection with YOLOv5, which needs only a very few training samples. See our step-by-step <a href=\"https:\/\/wiki.seeedstudio.com\/YOLOv5-Object-Detection-Jetson\/\">wiki<\/a> tutorials<\/li><li><a href=\"https:\/\/deci.ai\/blog\/jetson-machine-learning-inference\/\">Deci<\/a>: optimize your models on NVIDIA Jetson Nano. 
Check the <a href=\"https:\/\/info.deci.ai\/benchmark-optimize-runtime-performance-nvidia-jetson\">webinar<\/a> at Deci of Automatically Benchmark and Optimize Runtime Performance on NVIDIA Jetson Nano and Xavier NX Devices<\/li><\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Computer vision is what enables computers and systems to derive meaningful information from digital images,<\/p>\n","protected":false},"author":200,"featured_media":70541,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_lmt_disableupdate":"","_lmt_disable":"","_price":"","_stock":"","_tribe_ticket_header":"","_tribe_default_ticket_provider":"","_tribe_ticket_capacity":"0","_ticket_start_date":"","_ticket_end_date":"","_tribe_ticket_show_description":"","_tribe_ticket_show_not_going":false,"_tribe_ticket_use_global_stock":"","_tribe_ticket_global_stock_level":"","_global_stock_mode":"","_global_stock_cap":"","_tribe_rsvp_for_event":"","_tribe_ticket_going_count":"","_tribe_ticket_not_going_count":"","_tribe_tickets_list":"[]","_tribe_ticket_has_attendee_info_fields":false,"iawp_total_views":0,"footnotes":""},"categories":[4393],"tags":[3254],"class_list":["post-70538","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech","tag-computer-vision"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.0 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Computer Vision 101: what is computer vision, and how to implement CV with edge devices? 
- Latest News from Seeed Studio<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Computer Vision 101: what is computer vision, and how to implement CV with edge devices? - Latest News from Seeed Studio\" \/>\n<meta property=\"og:description\" content=\"Computer vision is what enables computers and systems to derive meaningful information from digital images,\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/\" \/>\n<meta property=\"og:site_name\" content=\"Latest News from Seeed Studio\" \/>\n<meta property=\"article:published_time\" content=\"2022-08-19T12:42:26+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2022-08-22T06:55:34+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png\" \/>\n\t<meta property=\"og:image:width\" content=\"891\" \/>\n\t<meta property=\"og:image:height\" content=\"458\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Elaine Wu\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Elaine Wu\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"16 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/\",\"url\":\"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/\",\"name\":\"Computer Vision 101: what is computer vision, and how to implement CV with edge devices? - Latest News from Seeed Studio\",\"isPartOf\":{\"@id\":\"https:\/\/www.seeedstudio.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png\",\"datePublished\":\"2022-08-19T12:42:26+00:00\",\"dateModified\":\"2022-08-22T06:55:34+00:00\",\"author\":{\"@id\":\"https:\/\/www.seeedstudio.com\/blog\/#\/schema\/person\/61c04bed5bbe2d098f04195c6e48fb11\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with
-edge-devices\/#primaryimage\",\"url\":\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png\",\"contentUrl\":\"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png\",\"width\":891,\"height\":458,\"caption\":\"Computer Vision 101: what is computer vision, and how to implement CV with edge devices?\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.seeedstudio.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Computer Vision 101: what is computer vision, and how to implement CV with edge devices?\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.seeedstudio.com\/blog\/#website\",\"url\":\"https:\/\/www.seeedstudio.com\/blog\/\",\"name\":\"Latest News from Seeed Studio\",\"description\":\"Emerging IoT, AI and Autonomous Applications on the Edge\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.seeedstudio.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.seeedstudio.com\/blog\/#\/schema\/person\/61c04bed5bbe2d098f04195c6e48fb11\",\"name\":\"Elaine Wu\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.seeedstudio.com\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/184af8ef71f0d6b64c276f9bb38b992e?s=96&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/184af8ef71f0d6b64c276f9bb38b992e?s=96&r=g\",\"caption\":\"Elaine Wu\"},\"description\":\"Head of AI Robotics @seeed Every day holds new magic \u2728 on ne sait 
jamais\u2601\ufe0f\",\"sameAs\":[\"https:\/\/www.linkedin.com\/in\/elaine1994\/\"],\"url\":\"https:\/\/www.seeedstudio.com\/blog\/author\/elaine\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Computer Vision 101: what is computer vision, and how to implement CV with edge devices? - Latest News from Seeed Studio","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/","og_locale":"en_US","og_type":"article","og_title":"Computer Vision 101: what is computer vision, and how to implement CV with edge devices? - Latest News from Seeed Studio","og_description":"Computer vision is what enables computers and systems to derive meaningful information from digital images,","og_url":"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/","og_site_name":"Latest News from Seeed Studio","article_published_time":"2022-08-19T12:42:26+00:00","article_modified_time":"2022-08-22T06:55:34+00:00","og_image":[{"width":891,"height":458,"url":"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png","type":"image\/png"}],"author":"Elaine Wu","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Elaine Wu","Est. 
reading time":"16 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/","url":"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/","name":"Computer Vision 101: what is computer vision, and how to implement CV with edge devices? - Latest News from Seeed Studio","isPartOf":{"@id":"https:\/\/www.seeedstudio.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/#primaryimage"},"image":{"@id":"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/#primaryimage"},"thumbnailUrl":"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png","datePublished":"2022-08-19T12:42:26+00:00","dateModified":"2022-08-22T06:55:34+00:00","author":{"@id":"https:\/\/www.seeedstudio.com\/blog\/#\/schema\/person\/61c04bed5bbe2d098f04195c6e48fb11"},"breadcrumb":{"@id":"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/#primaryimage","url":"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png","contentUrl":"https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/0
8\/image-19.png","width":891,"height":458,"caption":"Computer Vision 101: what is computer vision, and how to implement CV with edge devices?"},{"@type":"BreadcrumbList","@id":"https:\/\/www.seeedstudio.com\/blog\/2022\/08\/19\/computer-vision-101-what-is-computer-vision-and-how-to-implement-cv-with-edge-devices\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.seeedstudio.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Computer Vision 101: what is computer vision, and how to implement CV with edge devices?"}]},{"@type":"WebSite","@id":"https:\/\/www.seeedstudio.com\/blog\/#website","url":"https:\/\/www.seeedstudio.com\/blog\/","name":"Latest News from Seeed Studio","description":"Emerging IoT, AI and Autonomous Applications on the Edge","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.seeedstudio.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.seeedstudio.com\/blog\/#\/schema\/person\/61c04bed5bbe2d098f04195c6e48fb11","name":"Elaine Wu","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.seeedstudio.com\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/184af8ef71f0d6b64c276f9bb38b992e?s=96&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/184af8ef71f0d6b64c276f9bb38b992e?s=96&r=g","caption":"Elaine Wu"},"description":"Head of AI Robotics @seeed Every day holds new magic \u2728 on ne sait 
jamais\u2601\ufe0f","sameAs":["https:\/\/www.linkedin.com\/in\/elaine1994\/"],"url":"https:\/\/www.seeedstudio.com\/blog\/author\/elaine\/"}]}},"modified_by":"Lily","views":6056,"featured_image_urls":{"full":["https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png",891,458,false],"thumbnail":["https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19-80x80.png",80,80,true],"medium":["https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19-300x154.png",300,154,true],"medium_large":["https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19-768x395.png",640,329,true],"large":["https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png",640,329,false],"1536x1536":["https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png",891,458,false],"2048x2048":["https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png",891,458,false],"visody_icon":["https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png",32,16,false],"magazine-7-slider-full":["https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png",891,458,false],"magazine-7-slider-center":["https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png",891,458,false],"magazine-7-featured":["https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19.png",891,458,false],"magazine-7-medium":["https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19-720x380.png",720,380,true],"magazine-7-medium-square":["https:\/\/www.seeedstudio.com\/blog\/wp-content\/uploads\/2022\/08\/image-19-675x450.png",675,450,true]},"author_info":{"display_name":"Elaine Wu","author_link":"https:\/\/www.seeedstudio.com\/blog\/author\/elaine\/"},"category_info":"<a href=\"https:\/\/www.seeedstudio.com\/blog\/category\/tech\/\" rel=\"category 
tag\">Tech<\/a>","tag_info":"Tech","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.seeedstudio.com\/blog\/wp-json\/wp\/v2\/posts\/70538","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.seeedstudio.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.seeedstudio.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.seeedstudio.com\/blog\/wp-json\/wp\/v2\/users\/200"}],"replies":[{"embeddable":true,"href":"https:\/\/www.seeedstudio.com\/blog\/wp-json\/wp\/v2\/comments?post=70538"}],"version-history":[{"count":10,"href":"https:\/\/www.seeedstudio.com\/blog\/wp-json\/wp\/v2\/posts\/70538\/revisions"}],"predecessor-version":[{"id":70557,"href":"https:\/\/www.seeedstudio.com\/blog\/wp-json\/wp\/v2\/posts\/70538\/revisions\/70557"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.seeedstudio.com\/blog\/wp-json\/wp\/v2\/media\/70541"}],"wp:attachment":[{"href":"https:\/\/www.seeedstudio.com\/blog\/wp-json\/wp\/v2\/media?parent=70538"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.seeedstudio.com\/blog\/wp-json\/wp\/v2\/categories?post=70538"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.seeedstudio.com\/blog\/wp-json\/wp\/v2\/tags?post=70538"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}