A Focus on Image Processing with Neural Networks

Saurabh Durgapal works as a technology journalist at EFY

The Scream by Edvard Munch on the left, and the horrifying scream after exaggeration of patterns on the right

Lesser-known image analytics

Image analysis can also be done using many other techniques. These can be as simple as a binary tree of true-false decisions, or as complex as structured prediction.

Decision trees, for example, are used by Microsoft Structured Query Language (SQL) Server. We may never have expected to find analytics inside SQL, but when you consider how data is fetched in SQL, it makes a lot more sense.

A decision tree is a predictive modelling approach used in statistics, data mining and machine learning; an easy analogy is the binary tree. Popular implementations include IBM SPSS Modeler, RapidMiner, Microsoft SQL Server, MATLAB and the programming language R.

In R, a decision tree uses a complexity parameter to manage the trade-off between model complexity and accuracy on the training set. A smaller complexity parameter leads to a bigger tree, and vice versa. If the tree turns out too small, the model may not have captured the underlying trends properly and needs to be re-examined.
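
For a feel of this trade-off outside R, here is a minimal sketch using scikit-learn in Python, whose cost-complexity pruning parameter ccp_alpha plays a role similar to R's complexity parameter (a smaller value again yields a bigger tree); the dataset and values are purely illustrative:

# Sketch: how the pruning parameter trades tree size against training accuracy
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
for alpha in (0.0, 0.01, 0.05):
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0).fit(X, y)
    # A larger alpha prunes more aggressively: fewer leaves, lower training accuracy
    print(alpha, tree.get_n_leaves(), tree.score(X, y))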

Divide and conquer to analyse.

Cluster analysis, on the other hand, groups the objects in a data set into clusters. Data can be grouped on any number of parameters, which is why there are so many clustering algorithms; the clusters that emerge depend on the algorithm you choose, making clustering a multi-objective optimisation problem.
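
As a rough illustration, the following sketch clusters some synthetic points with k-means from scikit-learn; the number of clusters and the generated data are arbitrary choices:

# Sketch: grouping synthetic 2-D points into four clusters with k-means
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(labels[:10])  # cluster index assigned to the first ten points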

Clustering alone, however, struggles to keep up with current data analysis needs. With the introduction of the Internet of Things (IoT), the volume of data that needs to be analysed has grown enormously. Many methods fail due to the curse of dimensionality, and many parameters end up being left out while optimising the algorithm.

If some parameters can be left out during clustering, removing them from the data altogether is not a stretch. Dimension reduction is the process of reducing the number of random variables under consideration to a set of principal variables, through feature selection and feature extraction. In feature selection, filters score the features and decide which to add or remove while building the model, typically based on prediction errors.
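
A minimal sketch of filter-style feature selection, assuming scikit-learn and one of its built-in datasets purely for illustration, scores each feature against the target and keeps only the best few:

# Sketch: keep the ten features most correlated with the target
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_breast_cancer(return_X_y=True)
X_reduced = SelectKBest(f_classif, k=10).fit_transform(X, y)
print(X.shape, "->", X_reduced.shape)  # 30 features reduced to 10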

Feature extraction, in turn, transforms the high-dimensional data into fewer dimensions through principal component analysis (PCA), among other techniques. Application areas include neuroscience and searching live video streams, as removing multi-collinearity improves the learning system.
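
A corresponding sketch of feature extraction with PCA, again using scikit-learn's built-in digit images only as an example:

# Sketch: project 64-pixel digit images onto 8 uncorrelated principal components
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)           # 64 pixel features per image
X_low = PCA(n_components=8).fit_transform(X)  # keep 8 principal components
print(X.shape, "->", X_low.shape)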

Odd man out algorithms.

Used mostly in data mining, anomaly detection is another interesting technique. Anomalies, also referred to as outliers, novelties, noise, deviations and exceptions, are essentially a fancier version of the odd man out. Typical anomalies include bank fraud, structural defects, medical problems or errors in text. Image analysis by Uncanny Vision is an example: the system analyses CCTV footage and points out anomalies, down to something as basic as a person falling.
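
As a toy illustration of the idea (not Uncanny Vision's system), an isolation forest from scikit-learn can flag the odd men out in a small synthetic dataset:

# Sketch: points that are easy to isolate from the rest are flagged as outliers (-1)
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
normal = rng.normal(0, 1, size=(200, 2))    # typical observations
odd = np.array([[6.0, 6.0], [-5.0, 7.0]])   # obvious odd men out
X = np.vstack([normal, odd])
flags = IsolationForest(random_state=0).fit_predict(X)
print(flags[-2:])  # the injected outliers should be labelled -1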

Proper hardware support is also necessary

Neural networks have been around for decades, but early success was very limited due to the restricted availability of hardware and inadequate learning methods. “Last decade has been a welcome change in this regard,” says Dr Reger. He adds, “Advanced learning algorithms and massive 16-bit FP performance in modern hardware have turned neural networks into an effective technology for image analysis.”

The Movidius Myriad 2, Eyeriss and the custom vision processor in Microsoft HoloLens are examples of the chips used in today’s vision-processing systems. These may include direct interfaces to take data from cameras, with a greater emphasis on on-chip dataflow. They differ from regular video-processing units in that they are better suited to running machine-vision algorithms such as convolutional neural networks (CNNs) and the scale-invariant feature transform (SIFT). Application areas for vision processing include robotics, the IoT, digital cameras for virtual and augmented reality, and smart cameras.
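
For a flavour of the kind of workload these chips accelerate, here is a small SIFT sketch in Python, assuming an OpenCV build (4.4 or later) that includes SIFT and a hypothetical camera frame saved as frame.jpg:

# Sketch: detect SIFT keypoints and descriptors in a single camera frame
import cv2

frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(frame, None)
print(len(keypoints), "keypoints,", descriptors.shape, "descriptor matrix")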

Fooling an AI

Reports released last year suggested that changing just a few pixels in a photo of an elephant could fool a neural network into thinking it is a car. AI systems have made some disturbing mistakes as well. April Taylor of iTech Post has described in an article how her dogs were categorised as horses.
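
The elephant-turned-car trick is an example of an adversarial perturbation. As a rough sketch of the general idea (not the method from those reports), the classic fast gradient sign method nudges every pixel in the direction that increases the classifier's loss; the PyTorch snippet below assumes a trained model, an input image tensor and its true label:

# Sketch: one-step adversarial perturbation (fast gradient sign method)
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a slightly perturbed copy of image that raises the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move every pixel a small step in the direction that increases the loss
    return (image + epsilon * image.grad.sign()).detach()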

Twitter user jackyalcine also reported some funny business with Google’s AI. This led to Google’s chief social architect, Yonatan Zunger, issuing an apology and attempting to rectify the error, which resulted in the offending tag being removed altogether. The explanation behind the error came to light once it was made public.

The problem with Google’s AI lay in how it was trained on animal images. Training an AI is the process of determining a large set of parameters that calibrate a function mapping image data to content, so results vary with the images used in training. Some sample images tested by enthusiasts gave very creepy results; even a relatively simple neural network over-interpreted an image, producing trippy pictures.
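
As a toy illustration of training as parameter calibration, the sketch below fits a simple scikit-learn classifier on a built-in set of labelled digit images; the learned coefficients are the parameters that map pixel data to content, and its accuracy depends entirely on the images it was trained on:

# Sketch: "training" = fitting parameters that map pixel data to a label
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)  # calibrate the parameters
print("held-out accuracy:", clf.score(X_te, y_te))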
