Support. The peripherals integrated on the processor are important as well, as these are key drivers of the overall bill of material (BOM) cost of the end product. The higher the overlap between the peripherals required by the application and the peripherals integrated on the processor, the lower the BOM cost, besides the benefit of reduced board design complexity. “Given the fast-paced nature of today’s electronics market, one also needs to consider the software support (on an average, it is estimated that 70 per cent of the effort on typical embedded applications goes into software development) and the certifications available on the processor under consideration, as these can significantly reduce effort, cost and cycle time. For example, if one is designing an industrial application with support for Profibus or EtherCAT protocols, it may be prudent to choose a processor that supports these protocols with due certifications. Lastly, given today’s need for smaller devices in areas like healthcare/implantables and wearables, the physical/package size of the chip can also be a deciding factor,” explains Praveen Ganapathy, director (Business Development), Embedded Processing, Texas Instruments (India).
Business requirements. Ultimately, your design needs to ship as a successful product, which means the product must not cost too much because of an unnecessarily beefed-up processor.
If you are planning for high-volume production, it might be better to go for an inexpensive processor and spend additional engineering effort implementing functionality in software. For a lower target volume, it might be better to select a more capable (and more expensive) processor with the functionality on-chip, so as to minimise engineering effort and design cycle time.
Bandwidth requirements. It is typical for product features to change during product development, i.e., the end-product features are very likely to be different from the initially conceived idea. “Depending on the market segment, the degree of variation may differ. For industrial, automotive and military/aerospace, the variations may be minor while it may be sizeable in the consumer segment,” explain Vijay Bharat S. and Sachidananda Karanth.
T. Anand gives us some tips to get started. “First, apply a simple thumb rule: whatever your estimate of the data or memory requirement, select a processor with at least 25 per cent higher capacity. More importantly, always keep some margin in memory, bandwidth, data space, etc., so that future updates do not cripple the system.”
“The higher-capacity thumb rule covers you against the many small and distinct changes that often end up touching or crossing the limits. The additional capacity may cost a bit more, but it can save much more by avoiding catastrophic failure of the system in the field or during last-minute critical changes,” adds Anand.
Vijay Bharat S. and Sachidananda Karanth share more. “A general strategy could be to plan for only 40-60 per cent of the bandwidth so as to allow for spikes and variations. The actual percentage could be tweaked after careful evaluation of the various aspects of product requirements during:
“Design. Calculate the theoretical value to get the maximum bandwidth supported by the system based on processor speed and data rate.
“Implementation. Validate the bandwidth with respect to software overheads and speed limitations.
“Testing. Validate actual bandwidth and ensure bandwidth calculation meets the requirements.”
“When choosing a processor, matching the processor to the embedded application is important; it goes a long way towards preventing system failures, which is critical for embedded applications. In a robust processor, there are usually two cores running on the same data in lockstep; if there is any mismatch, an alert is triggered in the circuit. The second aspect to look out for is interrupt latency, which can affect the real-time schedulability of the system,” explains Praveen.
Overall, one must keep in mind the processing bandwidth, memory bandwidth and memory sub-system interfaces. Depending on the application, you might require high-speed interfaces, a faster processor or more on-chip memory.
Nilesh Ranpura, project manager, eInfoChips, shares some tips. “The first thing to be considered is whether the interface protocol meets its timing requirements. This means the memory interface with the processor must complete read and write cycles within the specified cycle counts. Write mem_read/mem_write routines in low-level firmware and test them on actual hardware.”
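Ranpura’s suggestion of low-level mem_read/mem_write routines, exercised on real hardware, might look like this minimal sketch. The function names follow his quote; the walking-ones pattern and the 32-bit word size are illustrative assumptions:

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal memory-access primitives of the kind suggested above.
 * volatile ensures each access actually reaches the bus rather
 * than being optimised away. */
static inline uint32_t mem_read(uintptr_t addr)
{
    return *(volatile uint32_t *)addr;
}

static inline void mem_write(uintptr_t addr, uint32_t val)
{
    *(volatile uint32_t *)addr = val;
}

/* Walking-ones test over a RAM window; returns 0 on success,
 * or (failing word index + 1). Run on actual hardware to
 * exercise the memory interface timing. */
static long mem_test(uintptr_t base, size_t words)
{
    for (size_t i = 0; i < words; i++) {
        uint32_t pattern = 1u << (i % 32);
        mem_write(base + i * sizeof(uint32_t), pattern);
        if (mem_read(base + i * sizeof(uint32_t)) != pattern)
            return (long)i + 1;
    }
    return 0;
}
```

On a real board, `base` would point at the external memory window under test; a marginal interface typically fails first on patterns that toggle many data lines at once.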
Shanmugasundaram adds that ultimately it all depends on the application. “If the application is a real-time one, then bandwidth requirements are important, along with prevention of system failures and data corruption. If the application is a non-real-time one, then the bandwidth requirements can take a backseat, though system failures and data corruption must still be avoided.”