Object classification and counting are essential for factory automation and devices where different objects are sorted or counted during packaging. Factories producing buttons, chocolates, and similar items require accurate counting for efficient packaging, which is a time-consuming and costly process when done manually.
Implementing an automation system to detect, classify, sort, and count objects can significantly streamline the production process. This project aims to create a cost-effective device for implementing such a factory automation system.
Object classification is easily achievable using Edge Impulse models. You can distinguish between a person and an animal, a bicycle and other types of vehicles, and so on. We demonstrated this in our previous article. Similarly, you can accurately count one type of object among other types.
The project was initially designed for MCU-level implementation on devices like the ESP32 and Arduino Nicla Vision, targeting a small capture area of 120×120 pixels with a relatively small button as the object of interest. However, even for this small area the MCUs proved inadequate, as the model file alone is approximately 8MB. Consequently, the project was moved to a Raspberry Pi computer, where it operates seamlessly. Refer to Fig. 1 for an illustration of the author's working prototype.
The components required for the project are listed in the Bill of Materials table.
| Bill of Materials |
|---|
| Raspberry Pi Zero W / Raspberry Pi 4 |
| RPi camera module |
| SD card, 18GB or above |
Raspberry Pi and Camera Connection
Refer to Fig. 2 for the camera connection with the Raspberry Pi. Connect the Raspberry Pi camera module as shown in the picture, and connect an HDMI display to the HDMI port of the Raspberry Pi.
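If the camera is not detected after connecting it, it may need to be enabled in the Raspberry Pi's boot configuration. The lines below are a reference sketch assuming a standard Raspberry Pi OS image; recent releases (Bullseye and later) auto-detect the camera, while older releases use the legacy camera stack settings:

```
# /boot/config.txt — Raspberry Pi OS Bullseye and later (libcamera stack)
camera_auto_detect=1

# Older releases using the legacy camera stack instead:
# start_x=1
# gpu_mem=128
```

Reboot the Raspberry Pi after editing this file for the change to take effect.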
Creating ML Model for Object Classification and Counting
To create the machine learning (ML) model, you need to classify and recognize objects. Various platforms such as TensorFlow, Edge Impulse, and Google Teachable Machine can be used for this purpose.
Start by opening an account on edgeimpulse.com using an email ID. Collect a handful of similar types of buttons. If you access the site from a Raspberry Pi computer, use the camera to collect images of buttons from various angles, which is crucial for real-world deployment.
Edge Impulse also allows connecting your cell phone or laptop for input device convenience in the data acquisition phase. Refer to Fig. 3 for Edge Impulse device addition and data collection.
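If you capture images with your own camera script rather than through Edge Impulse Studio, it helps to keep one folder per class before uploading, so labels can be assigned per folder. The sketch below is only illustrative — the `button`/`background` labels and the `<label>_<n>.jpg` file-naming convention are assumptions of this example, not an Edge Impulse requirement:

```python
import shutil
from pathlib import Path

def sort_by_label(src: Path, dest: Path, labels: list) -> dict:
    """Move captured images into one sub-folder per label.

    Files are expected to be named '<label>_<n>.jpg' (an assumed
    convention for this sketch). Returns a count of files per label.
    """
    counts = {label: 0 for label in labels}
    for img in src.glob("*.jpg"):
        for label in labels:
            if img.name.startswith(label + "_"):
                target = dest / label
                target.mkdir(parents=True, exist_ok=True)
                shutil.move(str(img), str(target / img.name))
                counts[label] += 1
                break
    return counts

if __name__ == "__main__":
    import tempfile
    # Create a few dummy captures to demonstrate the sorting.
    tmp = Path(tempfile.mkdtemp())
    src, dest = tmp / "captures", tmp / "dataset"
    src.mkdir()
    for name in ["button_1.jpg", "button_2.jpg", "background_1.jpg"]:
        (src / name).touch()
    print(sort_by_label(src, dest, ["button", "background"]))
    # → {'button': 2, 'background': 1}
```

Each resulting folder can then be uploaded with its label in a single batch.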
The Edge Impulse project is broadly divided into the following steps:
(a) Data Acquisition: This involves collecting various types of data such as images, sound, temperatures, distances, etc. Part of the data is set aside as test data, while the rest is used as training data.
(b) Impulse Design: This step covers creating the impulse, and is subdivided into:
1. Input parameters: image [width, height], sound [sound parameter].
2. Processing block: How to process the input data
3. Learning block: [Object data of this model]
4. Image processing: Generate features of the collected images
5. Object detection: Select your neural network model and train the model
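The train/test separation in step (a) can be sketched in plain Python. Edge Impulse Studio performs this split for you (roughly 80/20 by default), so this is only to illustrate the idea; the sample file names are made up:

```python
import random

def train_test_split(samples, test_ratio=0.2, seed=42):
    """Shuffle samples and hold out a fraction as test data."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # seeded for reproducibility
    n_test = max(1, int(len(items) * test_ratio))
    return items[n_test:], items[:n_test]  # (train, test)

if __name__ == "__main__":
    images = [f"button_{i:02d}.jpg" for i in range(40)]
    train, test = train_test_split(images)
    print(len(train), len(test))  # → 32 8
```

Keeping the test set completely separate from training is what makes the reported accuracy figure meaningful.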
The object detection step requires some expertise, or simply trial and error, to achieve an accuracy level of 85% or above. There are several models to try, and anything above 90% is considered excellent. However, accuracy should not reach 100%, as that can indicate issues with the data. For this device, the accuracy achieved was 98.6%, which is commendable for a starter project considering the limited dataset of around 40 samples. Refer to Fig. 3 for more details.
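Once the trained model is deployed (for example via the Edge Impulse Linux SDK on the Raspberry Pi), each frame yields a list of detections with a label and a confidence score, and counting reduces to filtering by a confidence threshold. The `(label, confidence)` pairs below are a simplified stand-in for whatever result format your inference runtime actually returns:

```python
from collections import Counter

def count_objects(detections, target="button", threshold=0.6):
    """Count detections of one label above a confidence threshold.

    `detections` is a list of (label, confidence) pairs — a simplified
    stand-in for the bounding-box results an inference runtime returns.
    """
    per_label = Counter(
        label for label, conf in detections if conf >= threshold
    )
    return per_label[target], dict(per_label)

if __name__ == "__main__":
    frame = [
        ("button", 0.97), ("button", 0.91),
        ("button", 0.55),  # below threshold, ignored
        ("coin", 0.88),
    ]
    buttons, all_counts = count_objects(frame)
    print(buttons, all_counts)  # → 2 {'button': 2, 'coin': 1}
```

The threshold is a tuning knob: raising it reduces false counts from weak detections at the cost of occasionally missing a genuine object.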