Digital Twin + Predictive Maintenance Using AI and ML

Ashwini Kumar Sinha

Heavy motors, engines and machines are crucial to the operation of factories, vehicles and much other equipment, so their reliable working is essential to prevent technical interruptions. For instance, if a heavy motor in a yacht or a ship suddenly stops working, the vessel will be stranded in the middle of the ocean, endangering the lives of everyone on board. Early fault detection can prevent such a mishap from occurring.

When a machine such as a motor starts developing small faults, its noise and power consumption rise, accompanied by varying vibrations. By detecting such micro-changes, we can predict the chances of failure, and repair, maintain and protect the machine before a more serious fault occurs.
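To make this concrete, one simple way to quantify such a micro-change is to compare the RMS (root-mean-square) level of audio frames: a faulty motor's added vibration noise raises the overall signal energy. The sketch below is purely illustrative, using synthetic signals (a clean hum versus the same hum plus broadband noise) rather than real motor recordings.

```python
import math
import random

def rms(samples):
    """Root-mean-square level of an audio frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

random.seed(0)
t = [i / 8000.0 for i in range(8000)]  # 1 second sampled at 8kHz

# Hypothetical 'normal' motor sound: a clean 120Hz hum
normal = [0.5 * math.sin(2 * math.pi * 120 * x) for x in t]

# Hypothetical 'faulty' motor: the same hum plus broadband vibration noise
faulty = [s + random.gauss(0, 0.3) for s in normal]

# The faulty signal carries extra energy, which the RMS level exposes
print(rms(normal))
print(rms(faulty))
```

A real system would track such statistics (or richer features like spectrograms, as used below) over time and flag a sustained drift from the machine's normal baseline.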

To address this, we will develop an ML model that monitors the sounds and vibrations produced by a machine, detects minute changes and predicts faults for predictive maintenance. Over time, the ML model learns the difference between the sounds a machine produces in normal and faulty conditions. By processing this outcome, it deduces whether a change indicates malfunctioning. If yes, an alert is sent in advance so that the necessary steps can be taken to prevent any damage.

Bill of Materials

Note: You can replace the voice bonnet with any Bluetooth or USB mic.

ML Model Training

You can choose from different platforms such as TensorFlow, SensiML and Edge Impulse for creating an ML model. Here, I am using Edge Impulse to train the ML model. To learn how to create and train an ML model, refer to the following article: X-Ray Based Covid detection

Train the ML model to detect minute changes in sound and vibration using audio signals. For this, install the Edge Impulse dependencies on the Raspberry Pi (refer to the above article for the installation procedure).

Run the following commands to install the dependencies:

curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -

sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps

npm config set user root && sudo npm install edge-impulse-linux -g --unsafe-perm

Next, create a new project and connect the Raspberry Pi to Edge Impulse by running

edge-impulse-linux

I have named this project ‘Digital Twin and IoT Predictive Maintenance’. You can, however, give it any other name of your choice.

Now open the URL shown in the Linux terminal in a web browser. You will see an option to upload data for training the ML model. Select the ‘microphone’ option and upload the audio of the motor, labelling it as ‘normal’ or ‘defect’, whichever is appropriate. Under the label ‘normal’, I uploaded the sound of a properly functioning motor. Under the label ‘defect’, I uploaded the audio of a motor that was not properly oiled and was running in an overloaded condition, along with recordings of the motor under low and high voltages and in a shaft-stuck condition.

Next, select the processing and learning blocks for the ML model to perform audio processing and classification. I have chosen Spectrogram for audio processing and a Keras neural network for learning.

In the Spectrogram block, select the parts of the collected audio data that you want to use as the features characterising the defective machine sound.
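To see what the Spectrogram block is computing under the hood, here is a minimal pure-Python sketch (not Edge Impulse's implementation, which uses an optimised FFT): the audio is sliced into overlapping frames and the magnitude of each frame's DFT gives one column of the time-frequency picture that the classifier learns from.

```python
import cmath
import math

def spectrogram(samples, frame_len=256, hop=128):
    """Magnitude spectrogram via a naive DFT over overlapping frames."""
    frames = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        mags = []
        for k in range(frame_len // 2):  # keep non-negative frequencies only
            acc = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_len)
                      for n in range(frame_len))
            mags.append(abs(acc))
        frames.append(mags)
    return frames  # list of frames, each a list of frequency-bin magnitudes

# A 1kHz test tone sampled at 8kHz lands exactly in bin 1000*256/8000 = 32
fs = 8000
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(1024)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin)  # 32
```

A motor developing a fault shifts energy into new frequency bins (for example, bearing rattle adds high-frequency content), which shows up as a visibly different spectrogram pattern for the model to separate.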

Next, go to the NN classifier, add the network layers and set the training parameters for the ML model.


Deploying ML Model 

Once you are satisfied with the accuracy of the ML model (I got an accuracy of around 90%), you can deploy it, for which the appropriate target hardware has to be selected as per the place of deployment.

Select a Linux board, open the Linux terminal on the Raspberry Pi, and download the ML model file by running the following in the terminal:

edge-impulse-linux-runner --download modelfile.eim


Coding 

Before you begin coding, install the dependencies and the Edge Impulse library on the Raspberry Pi, and clone the Edge Impulse Linux SDK.

To do the above, run the following in the terminal:

sudo apt-get install libatlas-base-dev libportaudio0 libportaudio2 libportaudiocpp0 portaudio19-dev

pip3 install edge_impulse_linux -i https://pypi.python.org/simple

git clone https://github.com/edgeimpulse/linux-sdk-python

Now, open the cloned Edge Impulse Python SDK folder. Go to examples, open the audio folder and create a new Python script with a name of your choice (here I have named it DigitalTwinpredict.py).

Next, open the classify.py file (present in the same SDK folder) and copy its code into DigitalTwinpredict.py (or whatever you named your file). Then import the AIY pins and the other functions needed to use the AIY voice bonnet.

After that, use an if condition to check the score of the sound labels, that is, normal or defect. If the score for the defect label is greater than 70% when the ML model processes the sound, call a function that sends an alert, using an LED and an alarm, stating that the motor is moving closer to a fault.
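The decision logic described above can be sketched as follows. This is a minimal illustration assuming the classifier result is a dictionary of label-to-confidence scores (as the Edge Impulse Linux SDK's audio examples produce); send_alert() here is a hypothetical stand-in for the actual AIY voice bonnet LED and buzzer calls.

```python
# The 'defect' label name and the 70% threshold follow the article's setup.
ALERT_THRESHOLD = 0.70

def should_alert(scores, threshold=ALERT_THRESHOLD):
    """scores: dict mapping label name -> confidence (0.0 to 1.0)."""
    return scores.get('defect', 0.0) > threshold

def send_alert():
    # On real hardware this would light the bonnet LED and sound the alarm;
    # here it just prints the warning.
    print('Warning: motor is moving closer to a fault!')

# Example classification result (hypothetical values)
result = {'normal': 0.12, 'defect': 0.88}
if should_alert(result):
    send_alert()
```

Keeping the threshold check in its own function makes it easy to tune the 70% cut-off later, or to require several consecutive high-score frames before alerting, which reduces false alarms from one-off noises.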


Testing 

After this, copy the downloaded .eim model file to the folder where the code is. Then open the terminal and run the script in Python with the path to the .eim ML model file, for example: python3 DigitalTwinpredict.py modelfile.eim. If you have multiple audio input devices, it might ask you for the audio port number; in that case, select the system default. Now play the motor sound, which might contain changes indicating a possible fault.

The model that I created processes various sounds of a motor: running in a normal condition, running without lubricant, running with shaft friction, running overloaded, and so on. So whenever the ML model detects a sound similar to that of an overloaded motor, or a change in sound caused by friction at the shaft, it issues an alert stating that a repair might be needed in the near future.


Download ML Model Datasets & Code
