Today, we will look at two Artificial Intelligence (AI) Cameras – the Raspberry Pi AI Camera and the ArduCam PiNSIGHT AI Camera. We’ll learn exactly what an “AI Camera” is and how these two devices can be used to add artificial intelligence to your next Raspberry Pi project.
Introduction
The first official Raspberry Pi Camera Module was introduced in 2013. Since then, cameras have become an essential accessory, forming the basis of many entertaining and practical projects.
With the explosion of artificial intelligence over the last couple of years, there has been a lot of interest in using the Raspberry Pi board and camera combination for tasks such as image recognition and object detection.
Typically, these AI projects capture video with the camera and then process it using the Raspberry Pi. This method tends to consume most of the Raspberry Pi’s resources, leaving the Pi with little memory or processing power to perform other tasks.
But there is another method of doing this, one that offloads the AI tasks to the camera itself, freeing up the Pi for other tasks. That method is to use an “AI Camera.”
AI Cameras
Although “AI Cameras” might sound like a new camera category, they have actually been around for a while.
In the early 2000s, “Smart Cameras” appeared for industrial machine vision tasks. These cameras had basic onboard processors (often DSPs) that ran image processing algorithms for tasks like reading barcodes or simple pattern recognition.
A decade later, as smartphones advanced, camera apps used increasingly powerful mobile GPUs and specialized chips for face detection, scene recognition, and basic AI-driven effects.
Small camera modules suitable for experimenters like us began surfacing about ten years ago. We have looked at the PixyCam and DFRobot HuskyLens camera modules in the DroneBot Workshop; these are earlier versions of AI camera modules.
AI cameras typically employ a range of technologies, including:
- Computer Vision – Enables cameras to interpret and understand visual data from images and videos.
- Machine Learning – Allows cameras to learn from data and enhance their performance over time.
- Deep Learning – A branch of machine learning that employs neural networks to analyze data.
- Image Processing – Allows cameras to enhance, filter, and manipulate images in real-time.
Standard functions of AI cameras include:
- Object Detection – Identifies and tracks objects within a scene.
- Facial Recognition – Identifies and verifies individuals based on their facial features.
- Gesture Analysis – Interprets human gestures and movements.
- Image Classification – Classifies images into predefined categories.
A key feature of any AI Camera is that all AI functions are integrated into the module. This technique is known as “Edge Computing,” where sensors and I/O devices have their own intelligence, allowing the host controller to focus on other things.
Raspberry Pi 5
I’ll be testing both cameras using identical hardware configurations:
- Raspberry Pi 5 with 8GB of RAM
- Raspberry Pi 27-watt power supply
- The 64-bit version of Raspberry Pi OS on a 64 GB MicroSD card
To simplify testing, I’ll mount each camera on a small tripod. The ArduCam PiNSIGHT camera has a ¼-inch thread to accommodate a standard photo tripod or monopod. I had to rig up a fixture using some perfboard and a surplus GoPro mount adapter for the Raspberry Pi AI Camera.
Because these cameras do the AI processing onboard, they can be used with older Raspberry Pi boards, such as the Pi 4 and Pi Zero 2 W.
Raspberry Pi AI Camera
The first AI Camera we will be looking at is from Raspberry Pi itself. They gave it the catchy name “Raspberry Pi AI Camera.”
The camera itself is about as unassuming as its name, but looks can be deceiving. While it may look like a standard Raspberry Pi camera module, and shares the same mounting dimensions, it is actually a tiny AI powerhouse.
As with the other official Raspberry Pi Cameras, this device connects through the Raspberry Pi CSI video connector. The camera uses this cable to send both video and tensor metadata. The AI Camera is packaged with two CSI cables, both the same length but with different-sized connectors to accommodate every Raspberry Pi model (mini for Pi 5 and Zero, standard for the rest).
A tool for adjusting the camera’s manual focus is also included. The focus can be set from 20 cm to infinity. The module measures 25 × 24 × 11.9 mm and has the same mounting holes as the standard Pi camera.
Raspberry Pi AI Camera Hardware
The heart (or, perhaps more accurately, the brains) of the Raspberry Pi AI Camera is the Sony IMX500 imaging sensor. This sensor combines an image sensor with a powerful DSP and dedicated 8MB on-chip SRAM to enable high-speed edge AI processing.
The AI Camera also has a Raspberry Pi RP2040 microcontroller onboard and an additional 16MB of flash memory configured as a cache. The RP2040 manages the memory cache and transfers firmware files between the Pi and the IMX500’s internal memory.
To use the camera’s AI features, you must first upload the neural network model firmware to the IMX500 memory cache. This procedure can take a few minutes. Once the firmware is loaded, there is no delay in using the camera.
When the IMX500 sensor starts streaming video, it functions similarly to the Raspberry Pi Camera Module v3 by converting the raw Bayer data to RGB and performing any necessary cropping or resizing in real-time. Then, those frames are passed to the IMX500’s built-in accelerator for neural network processing, and both the output results and the Bayer frames are subsequently transferred to the Raspberry Pi via the CSI-2 camera interface.
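To illustrate the Bayer-to-RGB step mentioned above, here is a deliberately naive demosaic sketch: each 2 × 2 RGGB cell is collapsed into a single RGB pixel. The IMX500’s pipeline interpolates per pixel and is far more sophisticated; the sample values below are made up for the example.

```python
def demosaic_rggb(bayer):
    """Naive demosaic: collapse each 2x2 RGGB cell into one RGB pixel."""
    h, w = len(bayer), len(bayer[0])
    rgb = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = bayer[y][x]                           # top-left: red
            g = (bayer[y][x + 1] + bayer[y + 1][x]) / 2  # two greens, averaged
            b = bayer[y + 1][x + 1]                   # bottom-right: blue
            row.append((r, g, b))
        rgb.append(row)
    return rgb

# A tiny 2x4 raw frame -> two RGB pixels.
raw = [[200, 120, 210, 130],
       [110,  40, 115,  50]]
print(demosaic_rggb(raw))
```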
Setting up the AI Camera
The Raspberry Pi AI Camera connects to the same CSI port you would use for a standard camera. Make sure the cable is oriented correctly, and installation will be a breeze.
You’ll need the latest version of Raspberry Pi OS to use the AI Camera, so open a terminal and type the following to bring everything up to date:
sudo apt update && sudo apt full-upgrade
Next, you must install the firmware files for the Sony IMX500 sensor. You can do that with this command:
sudo apt install imx500-all
This one command performs a multitude of functions:
- Installs the IMX500 sensor’s firmware files (imx500_loader.fpk and imx500_firmware.fpk) in /lib/firmware/, enabling sensor operation.
- Places multiple neural network model firmware files into /usr/share/imx500-models/.
- Adds the IMX500 post-processing software stages to rpicam-apps.
- Sets up the Sony network model packaging tools.
After the firmware files are installed, you will need to reboot the Raspberry Pi:
sudo reboot
You have now installed the software required to run the AI Camera.
Testing the Raspberry Pi AI Camera – RPICAM-APPS
One of the things that happened during the firmware installation was that the existing rpicam-apps suite was modified to include post-processing stages for AI camera data. Let’s run a few commands to test the camera.
MobileNet SSD Object Detection
MobileNet is a family of convolutional neural networks (CNNs) designed for mobile devices. They are used for image classification, object detection, and semantic segmentation. MobileNet is smaller and more efficient than many other CNNs and has low latency, making it well-suited for mobile and edge computing devices.
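MobileNet gets much of its efficiency from depthwise-separable convolutions, which split a standard convolution into a per-channel filter plus a 1 × 1 channel-mixing step. A quick parameter count (with illustrative channel sizes, not MobileNet’s actual layers) shows the saving:

```python
# Parameter count of one standard 3x3 convolution versus a
# depthwise-separable one (MobileNet's basic building block).
k, c_in, c_out = 3, 64, 128        # kernel size and channel counts (illustrative)

standard = k * k * c_in * c_out    # one full 3x3 filter bank per output channel
depthwise = k * k * c_in           # one 3x3 filter per input channel
pointwise = c_in * c_out           # 1x1 convolution to mix channels
separable = depthwise + pointwise

print(standard, separable, round(standard / separable, 1))
```

With these numbers the separable form needs roughly an eighth of the weights, which is why MobileNet fits comfortably on edge hardware.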
SSD (Single Shot MultiBox Detector) is a one-stage object detection technique. It works by dividing the possible bounding box outputs into a set of pre-defined default boxes. These default boxes have varying aspect ratios and scales and are associated with different locations on the feature maps.
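To make the default-box idea concrete, here is a small sketch that generates boxes of varying aspect ratios at a single feature-map location. The scale and ratios are illustrative values, not SSD’s exact configuration:

```python
import math

def default_boxes(cx, cy, scale, aspect_ratios):
    """Generate (cx, cy, w, h) default boxes centred on one feature-map cell.

    Coordinates are normalized to the [0, 1] image frame; each aspect
    ratio stretches the box while keeping its area roughly constant.
    """
    boxes = []
    for ar in aspect_ratios:
        w = scale * math.sqrt(ar)
        h = scale / math.sqrt(ar)
        boxes.append((cx, cy, w, h))
    return boxes

# Square, wide, and tall boxes at the centre of the image.
boxes = default_boxes(0.5, 0.5, scale=0.2, aspect_ratios=[1.0, 2.0, 0.5])
for cx, cy, w, h in boxes:
    print(f"w={w:.3f} h={h:.3f}")
```

At detection time, the network predicts, for every default box, a class score and a small offset that nudges the box onto the object.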
MobileNet SSD combines the efficiency of MobileNets with the accuracy of SSD to enable robust object detection in resource-constrained environments, such as the Raspberry Pi. We can test it out with the following RPICAM command:
rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_mobilenet_ssd.json --viewfinder-width 1920 --viewfinder-height 1080 --framerate 30
The object detection demonstration will draw bounding boxes around the objects it detects.
The firmware will be loaded into the IMX500 the first time you run the command. This can take about a minute, and you will see the progress in the terminal window. Once it is loaded, subsequent runs start almost instantly.
PoseNet Pose Detection
PoseNet is a deep learning model for human pose estimation. From a single RGB image, it locates key body points (eyes, shoulders, elbows, knees, and so on) and links them into a skeleton. PoseNet can operate in real-time, both indoors and outdoors, taking only a few milliseconds to process each frame.
This RPICAM command uses PoseNet for pose detection:
rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_posenet.json --viewfinder-width 1920 --viewfinder-height 1080 --framerate 30
Run the command and focus the camera on yourself (or someone else). You must get most of your subject’s body in the picture frame. Once you do, you can observe that the subject’s pose is superimposed upon their image.
Using OpenCV and Picamera2
Picamera2 is the libcamera-based replacement for Picamera, a Python interface to Raspberry Pi’s legacy camera stack. Picamera2 also presents an easy-to-use Python API to incorporate AI Camera data in your Python projects.
We will be using Picamera2 with OpenCV. OpenCV, the Open Source Computer Vision Library, is a free software library that helps build computer vision applications. We will also install an implementation of the Kuhn-Munkres Algorithm (also called the Hungarian Algorithm), which solves the assignment problem: given a matrix of costs, it finds the one-to-one matching that optimizes the total cost. We will use these to repeat the demonstrations that we performed using RPICAM. Remember, doing this with Picamera2 will allow you to use the results in your Python applications.
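To see what the Kuhn-Munkres algorithm computes, here is a brute-force version of the assignment problem on a tiny, made-up cost matrix, here cast as minimizing a total cost. The munkres package installed below solves the same problem in polynomial time; matching detected keypoints between frames is a typical use:

```python
from itertools import permutations

# Hypothetical cost matrix: cost[i][j] could be the distance between
# detection i and candidate j; we want the one-to-one matching that
# minimizes the total cost.
cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]

# Brute force for illustration only; Kuhn-Munkres does this in O(n^3).
best = min(permutations(range(3)),
           key=lambda p: sum(cost[i][p[i]] for i in range(3)))
total = sum(cost[i][best[i]] for i in range(3))
print(best, total)  # total cost 12
```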
The first step is to install the OpenCV dependencies and Munkres:
sudo apt install python3-opencv python3-munkres
Next, we will grab some examples from the Raspberry Pi Picamera2 GitHub page. The easiest way to accomplish this is just to clone the repository, as follows:
git clone https://github.com/raspberrypi/picamera2.git
After that is done, we can move into the IMX500 examples folder:
cd picamera2/examples/imx500
Here we will see five examples:
- imx500_classification_demo.py
- imx500_object_detection_demo.py
- imx500_object_detection_demo_mp.py
- imx500_pose_estimation_higherhrnet_demo.py
- imx500_segmentation_demo.py
Let’s run the Object Detection demo:
python imx500_object_detection_demo.py --model /usr/share/imx500-models/imx500_network_ssd_mobilenetv2_fpnlite_320x320_pp.rpk
The Object Detection demo has a parameter to specify the model you are using. We are using MobileNet SSD, as we did in the earlier demonstration.
We can also repeat the pose estimation demo:
python imx500_pose_estimation_higherhrnet_demo.py
Note how similar the examples are to the earlier ones we ran.
You can check out the documentation on the Raspberry Pi website for more information regarding the Raspberry Pi AI Camera.
Sony AITRIOS
Sony has an edge AI development platform called AITRIOS, and the Raspberry Pi AI Camera is compatible with it. You can use the tools supplied by Sony to develop your own AI models and send them to the IMX500 module in the Raspberry Pi AI Camera.
A great place to start is in the AI Tutorials section. You’ll learn how to set up your Raspberry Pi AI Camera and how to create and deploy your own AI models. You can also run tutorials using GitHub or Google Colab.
AITRIOS also has a page that links to a wealth of resources for the Raspberry Pi AI Camera; be sure to visit it. If I were you, I’d bypass the Brain Builder; it’s a commercial platform that costs about $7,000! Scroll to the bottom of the page and look at the free Developer Toolbox resources.
With these tools and tutorials, you can deploy your own custom models to the camera and build some brilliant Raspberry Pi projects.
Arducam PiNSIGHT AI Camera
The ArduCam PiNSIGHT is a self-contained AI camera with a USB-C connector for data and power. This 12.3 MP auto-focus 4K 30 fps camera is a 4 TOPS AI vision system that can be used for applications like face recognition, fatigue detection, and anomaly detection.
Arducam PiNSIGHT AI Camera Hardware
The camera is completely enclosed, with the USB-C connector being the only interface. It has mounting posts for a Raspberry Pi 5 board, allowing you to create an “all-in-one” AI camera. A small USB-C cable is provided with the camera.
The PiNSIGHT resembles an old digital camera from the early 2000s. Its enclosure features a ¼-inch mounting thread compatible with standard photo tripods. It measures 88.5×58×10 mm.
The image sensor measures 6.287 mm × 4.712 mm and has a resolution of 4056 × 3040 pixels. Its default focus range is 15 cm to infinity.
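From the figures above, you can derive the sensor’s pixel pitch, a number that hints at its light-gathering ability per pixel:

```python
# Pixel pitch from the quoted sensor width and horizontal resolution.
sensor_width_mm = 6.287
h_pixels = 4056

pitch_um = sensor_width_mm / h_pixels * 1000   # mm -> micrometres
print(f"{pitch_um:.2f} um per pixel")          # ~1.55 um
```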
This sensor is paired with a Luxonis OAK-SoM module, a 4 Tera Operations Per Second (TOPS) AI vision system. This powerful module enables hardware-based encoding (H.264, H.265, MJPEG) for faster video processing and optimized power consumption.
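A back-of-the-envelope calculation puts that 4 TOPS figure in perspective: at 30 frames per second, the module has a budget of roughly 133 billion operations for every frame it processes.

```python
# Rough compute budget per frame for a 4 TOPS accelerator at 30 fps.
tops = 4e12          # operations per second
fps = 30             # target frame rate

ops_per_frame = tops / fps
print(f"{ops_per_frame / 1e9:.0f} billion ops per frame")
```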
Setting up the ArduCam PiNSIGHT AI Camera
The ArduCam PiNSIGHT AI Camera is connected to the Raspberry Pi using a USB-C cable. Be sure to plug the cable into one of the “blue” USB connectors on the Raspberry Pi, as these are the faster USB 3.0 ports.
The camera can be attached to the Raspberry Pi board using the hardware provided. However, it can also operate as a stand-alone device connected via the USB-C cable.
While the ArduCam PiNSIGHT AI Camera was designed for the Raspberry Pi 5, it will also work with models 3 and 4 and the Raspberry Pi Zero 2 W.
Once you have the camera hooked up, you can start installing the software that the camera requires. The first step, as with the Raspberry Pi AI Camera, is to ensure that the Raspberry Pi OS has all the latest updates:
sudo apt update && sudo apt full-upgrade
Next, we need to download a script to install dependencies from the ArduCam site on GitHub:
wget https://github.com/ArduCAM/arducam_ppa/releases/download/v1.0.2/pinsight_install_dependencies.sh
When it is finished downloading, you’ll need to set the permissions on the installation script to “execute”:
chmod +x pinsight_install_dependencies.sh
Now you can run the script to install all of the file dependencies (it may take a while to download):
./pinsight_install_dependencies.sh
You should test your installation once the script has finished installing the files. The easiest way of doing this is to list your USB devices to see if the PiNSIGHT camera is one of them:
lsusb
Look for an item named “Intel Movidius MyriadX” in the list of devices; this is the AI camera. If it’s there, then the camera was installed successfully.
Testing the ArduCam PiNSIGHT AI Camera
The pinsight_install_dependencies.sh script installed a couple of directories in your home folder, with some code samples in these directories. The two directories are as follows:
- depthai – Contains demonstration applications.
- depthai-python – Example Python code.
It should be noted that Luxonis has provided all of this code, and it is for various cameras and devices, not exclusively for the ArduCam PiNSIGHT. For example, several demos are intended for stereo cameras, and the PiNSIGHT is a mono camera.
depthai_demo
One example that will work is the depthai_demo. You can run it as follows:
First change into the depthai directory:
cd ./depthai
Then run the example with Python:
python depthai_demo.py
The demo needs to load itself into the camera, so there will be a short delay before you see anything. I observed that the camera makes a slight “click” sound when it starts.
When the demo finally begins, you will see two windows, each displaying an image. They are as follows:
- nnInput – This is the image that is inputted into the model after processing.
- color – This is what the camera is actually seeing.
Each image has bounding boxes labeled with the detected object.
After playing with the demo, you can end it with a Ctrl-C (twice) in the Terminal window. Closing the video displays will just cause them to reopen.
Luxonis Repository
You can grab an extensive repository of demonstration code from the Luxonis depthai-experiments repository on GitHub. Navigate back to your home directory (if you are not already there) and type the following to clone the repository to your Raspberry Pi:
git clone https://github.com/luxonis/depthai-experiments.git
It is pretty big (about 1.5 GB), so downloading may take some time.
Once the download is complete, change into the depthai-experiments folder:
cd ./depthai-experiments
You will see over 100 folders, each one containing an experiment. All of the code is in Python, and for most (but not all) of the examples, there is a main.py application that will run the experiment:
python main.py
Once again, these are for a variety of cameras, not just the PiNSIGHT, so many of the examples won’t work. One that I tried that did work was the OCR example. You can run it in this folder:
cd ./depthai-experiments/gen2-ocr
Run the main.py file:
python main.py
When it starts, you’ll see a video window, as well as a second window that displays recognized text. The Raspberry Pi will attempt to read any text in the window. Hold up a book or magazine page to see how it works. The terminal also displays the text it found and its location on the page.
With over 100 examples, you’ll have enough to keep you busy for a while! And, of course, the real value in the examples is that they illustrate how to write your own Python code for the PiNSIGHT camera.
For additional examples, you can visit the Luxonis GitHub page.
Camera Comparisons
Comparing the Raspberry Pi AI Camera to the ArduCam PiNSIGHT camera isn’t as straightforward as you might expect. While these two cameras have many similarities, they are also quite different, and for any specific task, one might well be more suited than the other.
Specifications & Performance
This is one of the more challenging categories to compare, as the specifications for both cameras are minimal.
The Raspberry Pi AI Camera is based on the Sony IMX500 AI camera sensor, for which Sony has not provided a TOPS rating.
The ArduCam PiNSIGHT AI Camera features a Luxonis OAK-SoM module rated at 4 TOPS.
In practice, both cameras detect people well, and neither can distinguish a multimeter from a phone! So, I have no obvious winner in this category, but I expect the winner may be the PiNSIGHT, as it has more hardware inside than the Raspberry Pi AI Camera.
Price & Availability
The Raspberry Pi AI Camera is priced at 70 US dollars and is available from authorized Raspberry Pi dealers, of which there are many to choose from.
Buying the ArduCam camera was a bit of an adventure! I could not find it at a local distributor, so I purchased it directly from ArduCam’s website. This turned out to be a costly option: not only was the camera more expensive than the Pi camera ($99.99 US), but the shipping was also very expensive. While ArduCam listed “Express Shipping” (without any details as to who the shipper would be or how fast “express” was) at $15, the actual shipping cost was $43 US. Add to that the customs fees of $45 Canadian, and I paid almost the price of a Raspberry Pi camera just to get it shipped to me in Canada!
To be fair, I bought it a few months ago, and the price has gone down since then. I also note that Pi Hut now stocks them, so hopefully, ArduCam will soon get these out to more distributors.
But for now, the Raspberry Pi AI Camera is the clear winner in this category.
Software & Support
This is one of the most important categories. Does the manufacturer provide enough documentation to run and use the camera with your own applications? And do they update the supporting software periodically and add new examples?
Once again, the answer to either question is not that clear. That’s because the cameras are both based on other manufacturers’ products, and these products have their own documentation.
Raspberry Pi’s Wiki has documentation to help you get started with the Raspberry Pi AI Camera. They provide a few examples and explain how they work. The company also has a PDF with camera specifications stating that the camera will be supported until at least January 2028.
ArduCam also has documentation for the PiNSIGHT on their Wiki. They link to a few demos with a minimal but adequate explanation of their operation. There is no word as to how long the product will be supported.
But, once again, these cameras are based on other manufacturers’ products, and those manufacturers have quite a lot of documentation. Sony has a development platform and instructional videos for the IMX500. Luxonis also has an extensive list of resources for the OAK-SoM module.
Once again, this is a category with no obvious winner. However, Raspberry Pi does come out slightly ahead, with a defined EOL date and more detailed documentation on its Wiki than ArduCam. And the Sony IMX500 documentation and tools are more extensive than the Luxonis documentation.
Camera Features
By now, I’m sure you see a pattern: there are no real winners or losers. This trend continues when we look at the camera features, but this is also the category that will determine most people’s choices.
The Raspberry Pi AI Camera is quite similar in appearance to the standard Pi Camera. It will mount using the same hardware and is a direct substitute except for its depth. It connects with a CSI-2 cable, which is a flat, wide cable with some flexibility.
The Arducam PiNSIGHT is a self-contained camera with a ¼-inch mounting thread that lets it be mounted on any standard photography tripod (or on a ¼-20 bolt). It interfaces to the Raspberry Pi board via USB, a much more flexible arrangement. However, the camera is much larger than the Raspberry Pi camera, which may limit its placement.
The Raspberry Pi AI Camera is a manual-focus camera with a minimum focus distance of 20 cm.
The ArduCam PiNSIGHT AI Camera is an autofocus camera with a minimum focus distance of 15 cm.
In this impossible-to-decide category, I’ll give the ArduCam PiNSIGHT a bit of an advantage.
Conclusion
AI Cameras are essentially a ready-made solution in search of a problem to solve. The complex code for detecting objects or identifying items has been condensed into one package, all done for you. Just pass it a trained model, and the AI Camera will be ready to go.
Want to build an intelligent alarm system capable of detecting and alerting you to the presence of humans (or goats)? How about monitoring your whole home with AI cameras, only lighting and air conditioning rooms that are actually occupied? Or perhaps you want to build a robot.
AI Cameras can make projects like these a lot easier to build. While they are certainly more expensive than “regular” cameras, they offer many advantages in performance and code simplification.
So invest some time (and money) into an AI Camera, and see what you can develop. I bet it will be awesome!
Parts List
Here are some components you might need to complete the experiments in this article. Please note that some of these links may be affiliate links, and the DroneBot Workshop may receive a commission on your purchases. This does not increase the cost to you and is a method of supporting this ad-free website.
Raspberry Pi AI Camera – Raspberry Pi
ArduCam PiNSIGHT AI Camera – ArduCam
Manfrotto Mini Tripod – Amazon
Pi Camera Pan-Tilt Mount – Amazon
Resources
Cheat Sheets – The two “cheat-sheet” text files wrapped into one easy-to-use ZIP file!
Article PDF – A PDF version of this article in a ZIP file.
Raspberry Pi AI Camera – Documentation – The Raspberry Pi documentation wiki.
Raspberry Pi AI Camera – Product Brief – Camera description and specifications.
Sony Edge AI Devices – Links to resources for IMX500-equipped cameras, including the Raspberry Pi AI Camera.
IMX500 Models – GitHub page with AI models for Sony IMX500
Sony IMX500 Packager User Manual – Instructions for creating Raspberry Pi AI Camera deployment packages.
ArduCam PiNSIGHT – PiNSIGHT instructions on ArduCam Wiki.
Luxonis Docs – Documentation for Luxonis products
Luxonis GitHub Page – More code for Luxonis video modules