{"id":4662,"date":"2018-10-19T15:13:07","date_gmt":"2018-10-19T19:13:07","guid":{"rendered":"https:\/\/dronebotworkshop.com\/?p=4662"},"modified":"2023-04-12T12:37:47","modified_gmt":"2023-04-12T16:37:47","slug":"pixy2-camera","status":"publish","type":"post","link":"https:\/\/dronebotworkshop.com\/pixy2-camera\/","title":{"rendered":"Pixy2 Camera – Object Recognition with Arduino & Raspberry Pi"},"content":{"rendered":"\n
<\/a> <\/a><\/p>\n The ability to recognize objects has been a goal of computer scientists and AI researchers for decades. In the past, this has required large computers running sophisticated software, which has kept the technologies involved confined to labs and research departments with large budgets.<\/span><\/p>\n Flash forward to today and object recognition has become mainstream. Google and Facebook can identify faces from photographs and tag the pictures, advertising billboards can (in a somewhat controversial fashion) <\/span>identify a person’s gender and age to cater ads to them based upon the results<\/span><\/a>, and robots can even <\/span>solve Rubik\u2019s Cubes<\/span><\/a>.<\/span><\/p>\n Object recognition and computer vision technology is now available for experimenters as well, with several kits and cameras with various capabilities available.<\/span><\/p>\n Today we will examine one of these offerings, the Pixy2 Camera.<\/span><\/p>\n The <\/span>Pixy2<\/span><\/a> is a small camera designed for object recognition, line tracking, and simple barcode reading. \u00a0The device I used for this article and the video was supplied courtesy of <\/span>DFRobot<\/span><\/a>.<\/span><\/p>\n <\/p>\n The Pixy2 is capable of recognizing seven distinct objects based upon their shape and color (or hue). \u00a0Each of these objects is assigned a unique \u201csignature\u201d.<\/span><\/p>\n The camera also has algorithms for line following. Unlike traditional line followers, the Pixy2 can \u201clook ahead\u201d to determine when the line it is following (called a \u201cvector\u201d) is going to turn or cross another line (i.e. an \u201cintersection\u201d). \u00a0This is similar to the method you use when you walk, bike or drive down the street – you look ahead to anticipate any turns or stops you\u2019ll need to make.<\/span><\/p>\n <\/p>\n The Pixy2 can also detect 16 simple barcodes. 
You can use these as visual indicators for your robot project.<\/span><\/p>\n The Pixy2 is a self-contained unit; its onboard processor takes care of all the \u201cheavy lifting\u201d – recognizing specific objects and filtering out extraneous ones. \u00a0This frees up your microcomputer or microcontroller to perform other operations, using the Pixy2 as an \u201cintelligent sensor\u201d.<\/span><\/p>\n The origin of the Pixy2 can be traced back to the CMUcam, a device developed at the <\/span>Robotics Institute at Carnegie Mellon University<\/span><\/a> in 1999. \u00a0This device was one of the world's first affordable vision sensors and over the years it has gone through <\/span>several different iterations<\/span><\/a>.<\/span><\/p>\n <\/p>\n The fifth version of the camera (CMUcam5) was a joint effort of the robotics team at CMU and <\/span>Charmed Labs<\/span><\/a>. The device was funded in 2014 by a successful Kickstarter campaign under the name Pixy Cam.<\/span><\/p>\n The original Pixy Cam was a breakthrough in both capability and price. \u00a0It could be used with an Arduino, Raspberry Pi or on its own as a computer peripheral. <\/span><\/p>\n The Pixy2 is an improvement on the original. It is smaller, faster and has a fixed-focus lens. It is also much more powerful. Unlike the original Pixy Cam, the Pixy2 was not a crowdfunded project; Charmed Labs designed and built the Pixy2 using the profits made from sales of the original Pixy Cam.<\/span><\/p>\n The Pixy2 is the latest (as of this writing) version of the Pixy Cam. <\/span><\/p>\n <\/p>\n In addition to having all of the features of the original Pixy Cam, the Pixy2 has these additional functions:<\/span><\/p>\n The Pixy2 has a variety of connections that can be used to connect it to a computer, microcomputer or microcontroller.<\/span><\/p>\n The device uses a 5-volt power supply that can be derived from the USB connection. 
It can also be powered by a regulated voltage source of 5 volts or an unregulated source of 6 to 10 volts (the device has an onboard voltage regulator). It consumes about 140 mA at 5 volts.<\/span><\/p>\n The Pixy2 also has connections for two servo motors. These are for use with the optional pan and tilt assembly that can be attached to the Pixy2. <\/span><\/p>\n <\/p>\n One nice thing about the Pixy2 (and there are many \u201cnice things\u201d about it) is that it comes complete with the following accessories:<\/span><\/p>\n The camera and the accessories all come packaged in a small box. No documentation is included in the box; instead, a link to the Pixy2 website is printed on the package interior. <\/span><\/p>\n <\/p>\n A wealth of information is found on the <\/span>Pixy2 Wiki<\/span><\/a>. You\u2019ll find information regarding hooking up the camera to an Arduino, Raspberry Pi and BeagleBone Black, assembling the pan and tilt mount and, of course, learning to train and use the Pixy2.<\/span><\/p>\n The Wiki also has a link to the <\/span>Pixy2 Downloads<\/span><\/a> page. This is where you can download sample code and libraries for the camera, as well as firmware updates for the Pixy2 itself. \u00a0You can also download images of the simple barcodes that you can use with the camera.<\/span><\/p>\n Charmed Labs also maintains a <\/span>Pixy2 page on GitHub<\/span><\/a> with scripts and binaries.<\/span><\/p>\n One software product you will certainly want to download is called PixyMon. In fact, installing it is probably the very first thing you should do after you open up the box with your Pixy2.<\/span><\/p>\n PixyMon is software that runs on your computer and allows you to configure the Pixy2, train it to recognize objects and monitor and debug your own programs. It also provides a monitor that allows you to see what the camera is \u201clooking at\u201d.<\/span><\/p>\n PixyMon is available for Windows, Linux, and Mac OS X. 
<\/span><\/p>\n The first step is, of course, to download the version of PixyMon suitable for your computer; you can get that on the <\/span>Pixy2 Download page<\/span><\/a>.<\/span><\/p>\n The wiki provides detailed instructions for installing PixyMon; the following links will take you to the specific instructions for your computer:<\/span><\/p>\n Once you have installed PixyMon, connect the Pixy2 to the computer via the USB port. <\/span><\/p>\n Unless you have a specific reason for doing otherwise, you should use the USB cable that was supplied with your camera; cheaper USB cables often use inferior connectors and thinner wire and may not provide the Pixy2 with the proper voltage it requires to operate. This is especially important if you are using the optional pan and tilt servos, as they require additional current.<\/span><\/p>\n The first time you connect the Pixy2 to your computer and start PixyMon you may be prompted to download and install a firmware update for your camera. Go ahead and do that so that you have the latest version of the code.<\/span><\/p>\n Windows users may also need to wait while some drivers for the Pixy2 USB connection are installed. This will only take a minute or so and only needs to be done once.<\/span><\/p>\n After installing any updates and drivers, PixyMon will open.<\/span><\/p>\n <\/p>\n When PixyMon is opened it will default to Color Connected Components (CCC) mode. In this mode PixyMon will display the video image from the Pixy2 along with outlines and coordinates of any objects it has been trained to detect.<\/span><\/p>\n Of course, when you first use the program with a new Pixy2 there won\u2019t be any objects outlined, as it hasn\u2019t been trained yet! We will address this shortly.<\/span><\/p>\n Aside from the rather obvious display screen, PixyMon has a number of features accessible using the menu at the top of the display. 
\u00a0These menu selections change depending upon which Program Mode the Pixy2 is currently running. There are four program modes:<\/span><\/p>\n You can use PixyMon to put the Pixy2 into any of the above program modes. You can also change the Pixy2 program mode programmatically within your sketch or program.<\/span><\/p>\n PixyMon can be used by itself to control the Pixy2. It can also be used in tandem with a microcontroller connected to one of the Pixy2 I\/O connections (e.g. I2C or SPI). \u00a0This allows you to use PixyMon as a monitor or as a debug tool, very handy if your Arduino sketch does not perform the way you intended it to.<\/span><\/p>\n PixyMon also allows you to configure the parameters on the Pixy2. Settings made using PixyMon are saved in non-volatile memory on the camera.<\/span><\/p>\n You can also use PixyMon to save and recall different configurations. So you can train the Pixy2 to recognize a number of signatures, save them, wipe the signatures on the device and retrain it on other objects. Later you can load the saved configuration back into your Pixy2. This allows you to work with several groups of object signatures.<\/span><\/p>\n Of course, the main feature of the Pixy2 is its ability to recognize and track objects. Let\u2019s take a look at how this is accomplished.<\/span><\/p>\n The Pixy2 uses a hue- or color-based filtering algorithm to detect objects. \u00a0So objects you wish to detect should have a distinct color. You can \u201cfine-tune\u201d the settings to some degree, but it is difficult to distinguish two objects with the same color.<\/span><\/p>\n In addition to the object\u2019s hue, the Pixy2 also uses a \u201cregion growing algorithm\u201d to distinguish an object.<\/span><\/p>\n Training the Pixy2 to detect an object is done by assigning the object a \u201ccolor signature\u201d. 
The device can be trained to memorize up to seven color signatures.<\/span><\/p>\n You can also train the Pixy2 to remember up to seven additional \u201ccolor codes\u201d. A color code consists of two colors; a couple of pieces of colored tape can make a good color code. This can be useful in robotics applications; you can place \u201ccolor code signs\u201d at specific locations to allow your robot to take action once it \u201csees\u201d them.<\/span><\/p>\n There are two ways to train your Pixy2 to recognize objects – manually and with PixyMon. \u00a0All object recognition is performed in Color Connected Components mode.<\/span><\/p>\n The Pixy2 is capable of being trained in a stand-alone mode without being connected to any computer or microcontroller. \u00a0All you need to do is provide a source of power for the camera.<\/span><\/p>\n The push-button on the top of the Pixy2 is used along with the RGB LED to train the device.<\/span><\/p>\n Power up the Pixy2 and observe the status of the RGB LED, located near the bottom on the front of the camera. This indicator is the key to training the device manually.<\/span><\/p>\n When the camera is powered up the RGB LED will go through a series of flashes. This is the initialization sequence and you\u2019ll need to wait while it runs (it only takes a few seconds). After initialization the indicator will turn off. You are now ready to train the Pixy2.<\/span><\/p>\n Find a suitable object to train the Pixy2 to detect. A good candidate object will have a distinct color; I used a yellow golf ball as my first object and it worked pretty well. \u00a0Place the object between 20 – 50 cm (6 – 20 inches) in front of the camera lens.<\/span><\/p>\n While you can train the Pixy2 in stand-alone mode, it is a good idea to have PixyMon running when you are new to setting color signatures. The monitor will make it easier to be sure you have your camera locking onto the correct object. 
\u00a0Once you become more adept at training the device you can dispense with PixyMon and do your training in stand-alone mode.<\/span><\/p>\n So with your object in place and PixyMon running to monitor your progress, it\u2019s time to train the Pixy2 to detect its first object!<\/span><\/p>\n Start by pressing down and holding the push button. After about a second the RGB LED should glow white. Continue to hold down the button until it glows red, then immediately release the push button.<\/span><\/p>\n The red indicates that you are setting the color signature for signature number 1. If you continue holding down the button, it will cycle through seven different colors, each one representing one of the color signatures.<\/span><\/p>\n <\/p>\n Once you release the push button for signature number 1 (or any of the other six signatures) the Pixy2 will be in \u201clight pipe\u201d mode. The RGB LED will now glow in a color that approximates the color of the object you are trying to train the camera to recognize. <\/span><\/p>\n Observe the RGB LED and the video screen on PixyMon. <\/span><\/p>\n When the object is picked up by the Pixy2 you will see a grid pattern engulfing it. Move the object around until the grid covers it completely, or as much as possible, without picking up on extraneous objects (like your fingers, for example). \u00a0At that point, the RGB LED should be illuminated in a color similar to the target object.<\/span><\/p>\n Once you are satisfied that you are locked onto the object, press and release the push button. The object will now be assigned to the color signature you were training it for.<\/span><\/p>\n You have now trained the Pixy2 to recognize its first object!<\/span><\/p>\n Another way to train the Pixy2 to recognize objects and assign signatures to them is to use PixyMon.<\/span><\/p>\n Once again you\u2019ll need to place the object you want to recognize in front of the camera. 
In PixyMon be sure you are in Color Connected Components mode, which is the default mode when you first start the program.<\/span><\/p>\n Once the object is visible in your monitor, go to the <\/span>Action <\/span><\/i>menu and choose one of the \u201cSet Signature\u201d selections, e.g. <\/span>Set Signature 1<\/span><\/i>.<\/span><\/p>\n Now use your mouse and click and drag to outline the object you are trying to recognize.<\/span><\/p>\n <\/p>\n Once you have selected the area, release the mouse button. PixyMon will now have learned the object. \u00a0It\u2019s as simple as that!<\/span><\/p>\n When you train the Pixy2 to recognize an object it will be displayed in PixyMon with a label that indicates its signature number. For example, the object assigned to signature number 1 will have \u201cs=1\u201d printed in its display block.<\/span><\/p>\nIntroduction <\/span><\/h2>\n
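Once a signature has been trained, a connected microcontroller can ask the Pixy2 for whatever matching objects it currently sees. As a rough sketch of what that looks like with the Pixy2 Arduino library (available from the Pixy2 downloads page; this is only an illustrative example, so check the library's bundled examples for the exact API on your version):

```cpp
#include <Pixy2.h>  // Pixy2 Arduino library from the Pixy2 downloads page

Pixy2 pixy;

void setup() {
  Serial.begin(115200);
  pixy.init();  // connect to the Pixy2 (SPI by default)
}

void loop() {
  // Ask the Pixy2 for the "blocks" (trained objects) in the current frame
  pixy.ccc.getBlocks();

  for (int i = 0; i < pixy.ccc.numBlocks; i++) {
    // Each block reports which color signature it matched,
    // plus its position in the image
    Serial.print("Signature: ");
    Serial.print(pixy.ccc.blocks[i].m_signature);
    Serial.print("  x: ");
    Serial.print(pixy.ccc.blocks[i].m_x);
    Serial.print("  y: ");
    Serial.println(pixy.ccc.blocks[i].m_y);
  }
}
```

Because the Pixy2 performs the recognition itself, the sketch only ever deals with a short list of matched blocks rather than raw video, which is exactly the "intelligent sensor" role described earlier.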
The Pixy2 Camera<\/span><\/h2>\n
Pixy2 History<\/span><\/h3>\n
Pixy2 Features<\/span><\/h3>\n
\n
\n
\n
Pixy2 Wiki<\/span><\/h3>\n
PixyMon<\/span><\/h2>\n
Installing PixyMon<\/span><\/h3>\n
\n
Using PixyMon<\/span><\/h3>\n
\n
Object Recognition<\/span><\/h2>\n
Capturing Signatures Manually<\/span><\/h3>\n
Capturing Signatures with PixyMon<\/span><\/h3>\n
Labels and Fine Tuning<\/span><\/h3>\n