The 2D synaptic array can process motion within microseconds, improving hazard detection in autonomous driving and grasping performance in robotic systems.

Researchers from China, in collaboration with international institutions, have developed a brain-inspired hardware system that accelerates machine vision processing by up to four times, potentially enabling autonomous vehicles and robotic systems to react faster than human drivers. The study introduces a hardware-level motion detection architecture designed to reduce the reaction delays that have long limited the safety of automated systems.
The system addresses a critical gap in autonomous driving: a vehicle travelling at 80 kilometres per hour can take roughly 0.5 seconds to respond to a hazard, compared with the human brain's average reaction time of 0.15 seconds, and that delay can translate into an additional 13 metres of travel before braking. By accelerating visual data processing, the new hardware reduced reaction time by approximately 0.2 seconds in real-world driving tests, cutting braking distance by about 4.4 metres at the same speed. The researchers reported a 213.5 per cent improvement in hazard detection performance in driving scenarios and a 740.9 per cent increase in robotic grasping capability under controlled conditions.
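The 4.4-metre figure follows directly from the distance covered during a reaction delay, d = v·Δt. A minimal back-of-envelope check, assuming constant speed and ignoring braking dynamics:

```python
# Sanity check of the braking-distance figure reported above:
# distance travelled during a reaction delay is d = v * t.

V_KMH = 80.0                # vehicle speed cited in the article
v = V_KMH * 1000 / 3600     # convert to metres per second (~22.2 m/s)

delay_saved = 0.2           # reported reduction in reaction time (seconds)
print(f"Distance saved: {v * delay_saved:.1f} m")  # ~4.4 m, matching the article
```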
At the core of the development is a two-dimensional synaptic transistor array that follows a filter-then-process model inspired by biological vision. Instead of analysing entire high-definition frames, the chip detects image changes within 100 microseconds, isolates moving objects and forwards only the relevant motion data to conventional computer vision algorithms. The device can retain motion information for more than 10,000 seconds and sustain over 8,000 operational cycles without degradation.
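To make the filter-then-process idea concrete, here is an illustrative software analogue of the pipeline described above; it is a sketch, not the published hardware design, and the change threshold and the downstream `detect_objects` hook are hypothetical placeholders:

```python
import numpy as np

# Illustrative filter-then-process pipeline: detect per-pixel changes
# between frames first, then hand only the moving regions to a
# conventional (and more expensive) vision algorithm.

CHANGE_THRESHOLD = 25  # assumed intensity-change threshold (0-255 scale)

def motion_mask(prev_frame: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels whose intensity changed significantly."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > CHANGE_THRESHOLD

def filter_then_process(prev_frame, frame, detect_objects):
    """Run the expensive detector only when, and where, motion is present."""
    mask = motion_mask(prev_frame, frame)
    if not mask.any():
        return []                      # static scene: skip heavy processing
    masked = np.where(mask, frame, 0)  # forward only the moving regions
    return detect_objects(masked)
```

In the reported device this filtering stage is done in the synaptic transistor array itself, so the costly frame-level analysis only ever sees motion-relevant data.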
Gao Shuo, associate professor at Beihang University, says, “Our approach demonstrates a 400 per cent speed-up, surpassing human-level performance while maintaining or improving accuracy through temporal priors.”