From the smartphone in your hand to the dashboard of your car, the embedded media processor works continuously to deliver what you see and hear. It must perform a number of tasks concurrently: collect inputs, process audio and video, synchronise the two, handle graphics, perform a host of format conversions and cognitive analysis, and finally output the result to a panel or display.
While all you probably need is a universal serial bus (USB) 3.0 port to reap its benefits, what goes into these media processors is, to say the least, vast. “As the need for bandwidth, coupled with data-sharing and ease-of-access requirements, increases, interface architectures have evolved substantially over the years to create solutions for the growing Internet of Things segment,” says Nate Srinath, founder-director, Inxee Systems Pvt Ltd. This article looks at what goes into the making of these processors and how that translates into what we hear or see on the screen.
More functionality at lower cost
“In today’s media processors, we have about 250 different intellectual property blocks, and this number could go up to 400 or 500 blocks soon,” says Manuel Rei, industry director, Dassault Systemes. “Media processors should be able to manage different kinds of signals from the same platform, as the cost would skyrocket otherwise,” he adds.
Ashok Chandak, senior director, global sales and marketing, NXP Semiconductors India Pvt Ltd, predicts that the popular devices will be those that provide the most capabilities inside a single package. “To keep up with this, chips that go into media processors have different dies stacked one on top of the other. Each die has a specific function, enabling different capabilities and functionalities within a single package,” he says.
Three-dimensional integrated chips that combine system-on-chip and system-in-package approaches will be the key enablers of high-performance, small-form-factor products, feels Srinath.
Bill Giovino, embedded systems specialist at Mouser Electronics, elaborates, “The movement has been towards higher integration, putting features like video, audio and touch-screen control on one chip. Video processors are supporting higher pixel resolutions, with speed improvements that eliminate the need for the cost and board space taken up by frame buffer dynamic random-access memory (DRAM).”
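A quick back-of-the-envelope calculation shows why an on-chip frame buffer matters: the sketch below (an illustrative calculation, not taken from any vendor's datasheet) works out how much memory a single uncompressed 24-bit RGB frame needs at common resolutions, which is what a discrete frame-buffer DRAM would otherwise have to hold.

```python
# Back-of-the-envelope frame-buffer sizing: memory needed to hold
# one uncompressed video frame at common display resolutions.
def frame_buffer_bytes(width, height, bytes_per_pixel=3):
    """Raw size of one frame; default is 24-bit RGB (3 bytes/pixel)."""
    return width * height * bytes_per_pixel

resolutions = {
    "720p":  (1280, 720),
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
}
for name, (w, h) in resolutions.items():
    mib = frame_buffer_bytes(w, h) / (1024 * 1024)
    print(f"{name:>6}: {mib:5.1f} MiB per frame")
```

A single 1080p RGB frame alone comes to roughly 6 MiB, so integrating enough on-chip memory (or processing fast enough to avoid buffering whole frames) removes a dedicated external DRAM chip and its board space.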
Support for higher resolution. Vijay Bharat S., associate vice president – hardware design, Mistral Solutions Pvt Ltd, chips in, “Built-in radios for communication, high frame-rate video capture and display, greater-than-4K-resolution videos and a number of processing cores are on everyone’s mind. High-definition video resolutions start from 1080p, with link rates of up to 3Gbps over the high-definition serial digital interface.”
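The 3Gbps figure can be reproduced from standard 1080p60 video timing. The sketch below assumes 4:2:2 10-bit sampling (20 bits per pixel) over the full raster including blanking, which is 2200×1125 pixels for 1080p60; these parameters come from the common SDI signalling conventions, not from the article itself.

```python
# Reproducing the ~3Gbps 3G-SDI link rate for 1080p60 video.
# SDI carries 4:2:2 10-bit samples: 10 bits luma + 10 bits chroma
# per pixel = 20 bits, over the full raster including blanking.
def sdi_link_rate_bps(total_width, total_height, fps, bits_per_pixel=20):
    """Serial link rate in bits per second for the given raster."""
    return total_width * total_height * fps * bits_per_pixel

rate = sdi_link_rate_bps(2200, 1125, 60)  # 1080p60 total raster
print(f"{rate / 1e9:.2f} Gbps")           # 2.97 Gbps
```

The result, 2.97Gbps, is exactly the nominal "3G" serial rate, which is why 1080p at 60 frames per second marks the step up from HD-SDI to 3G-SDI links.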
He suggests that various techniques are available to improve audio-visual quality. Once audio-visual signals are converted from analogue to digital form, quality improves and noise on the signal can be removed far more easily. “For audio signals, in particular, more types of digital filtering and sound synthesis are being implemented on-chip,” says Giovino.
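As a minimal illustration of the on-chip digital filtering Giovino mentions, the sketch below applies a simple moving-average FIR low-pass filter to a digitised signal corrupted by random noise. This is a hypothetical textbook example, not any vendor's implementation; real audio chains use more sophisticated filter designs.

```python
import random

def moving_average(samples, taps=8):
    """Simple FIR low-pass: each output is the mean of the last `taps` inputs."""
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - taps + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

# A steady 1.0 "tone" corrupted by uniform random noise...
random.seed(0)
noisy = [1.0 + random.uniform(-0.2, 0.2) for _ in range(100)]
smooth = moving_average(noisy)

# ...after filtering, the worst-case deviation from the true value shrinks.
err_before = max(abs(s - 1.0) for s in noisy)
err_after = max(abs(s - 1.0) for s in smooth[8:])  # skip filter warm-up
print(err_after < err_before)
```

Averaging `taps` samples suppresses uncorrelated noise while leaving slowly varying content intact; this is the basic trade-off behind every low-pass filter stage, whether implemented in software or as a hardwired block on the media processor.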