
For every embedded design, software considerations are very important. “However, the term embedded design is itself very broad, and so careful consideration has to be taken by the architect or system designer. The designer has to think about the amount of writing to be done. So, in field-deployed systems, we need to know what the system needs to do (application) versus what is demanded by the OS. Hence, there is a need to know what is written in storage,” explains Sambit Sengupta, demand creation manager, Avnet Electronics Marketing. RTOSes are highly application-optimised, and embedded designs built on them typically use solid-state drives (SSDs). With a custom OS, however, the designer has to be careful about bad-block management, which is why such designs mostly use eMMC-based memory, where the built-in controller handles it.

Compatibility with platforms
Not all storage devices are compatible with all chipsets, field programmable gate arrays (FPGAs) or microcontroller families.

For example, the MT29F family of NAND flash by Micron is compatible with Freescale’s Kinetis K70 microcontroller units (MCUs), TI’s Sitara ARM9 processors and Xilinx’s Zynq-7000 all-programmable system-on-chips (SoCs), but not suited for Freescale’s Kinetis K50 MCUs and Xilinx’s Virtex FPGAs. Usually, storage solution vendors provide ready reckoners to check the compatibility of their product with different available platforms.

With higher data width comes a faster data-transfer rate, provided the data-line support is available. This means that, in the same cycle, a memory with a 32-bit data width will fetch more data than one with a 16-bit data width, which in turn will fetch more than one with an 8-bit data width.
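A quick back-of-the-envelope sketch makes the point. The figures below assume one transfer per clock cycle and an illustrative 100MHz bus clock; real buses add wait states, burst modes and setup overheads that change the picture.

#include <stdio.h>

int main(void)
{
    /* One transfer per clock cycle; the 100MHz clock is illustrative. */
    const double clock_hz = 100e6;
    const int widths[] = { 8, 16, 32 };   /* data-bus widths in bits */

    for (int i = 0; i < 3; i++) {
        double bytes_per_sec = (widths[i] / 8.0) * clock_hz;
        printf("%2d-bit bus: %6.1f MB/s peak\n",
               widths[i], bytes_per_sec / 1e6);
    }
    return 0;
}

Under these assumptions, the peak rate simply scales with the bus width: 100MB/s at 8 bits, 200MB/s at 16 bits and 400MB/s at 32 bits.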

According to Guru Ganesan, managing director, ARM, “There is a lot of developer as well as application migration from 8- and 16-bit MCUs to 32-bit ones today. The chief reason is that typical modern-day or IoT types of applications require more memory (determined by the address bus size), as well as larger data buses for quicker and more efficient operation, from a layperson’s standpoint,” he explains in an interview with EFY.

Evolving circumstances and the importance of supply
The first generation of engineers using NAND memory had to not only select the memory but also figure out how to manage it. “The NAND was manufactured in a thin small outline package (TSOP), and embedded design engineers had to write the code to manage reads and writes on this memory, as well as figure out error correction,” explains Vivek Tyagi.

Moreover, since NAND characteristics would change from one supplier to the next, if a certain requirement (technical or cost) meant changing your supplier, the engineer had to rewrite the software managing the memory after studying the new supplier’s data sheets. Not many engineers looked forward to this.
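As a flavour of what that supplier-specific code involved, here is a minimal sketch of factory bad-block scanning on raw NAND. The geometry constants and the nand_read_page() driver hook are hypothetical stand-ins for exactly the kind of details that had to be re-read from each new supplier’s data sheet.

#include <stdbool.h>
#include <stdint.h>

/* Geometry for one hypothetical raw NAND part -- values like these
 * changed from supplier to supplier. */
#define PAGE_SIZE        2048   /* main-area bytes per page            */
#define SPARE_SIZE       64     /* spare (OOB) bytes per page          */
#define PAGES_PER_BLOCK  64     /* pages in an erase block             */
#define BAD_BLOCK_OFFSET 0      /* spare byte holding the bad marker   */

/* Low-level page read supplied by the board's NAND driver (assumed). */
extern int nand_read_page(uint32_t block, uint32_t page,
                          uint8_t *main_buf, uint8_t *spare_buf);

bool nand_block_is_bad(uint32_t block)
{
    uint8_t main_buf[PAGE_SIZE];
    uint8_t spare_buf[SPARE_SIZE];

    /* Factory bad blocks carry a non-0xFF marker byte in the spare
     * area of the first page; some parts also mark the second page,
     * so conservative code checks both. */
    for (uint32_t page = 0; page < 2; page++) {
        if (nand_read_page(block, page, main_buf, spare_buf) != 0)
            return true;   /* unreadable page: treat the block as bad */
        if (spare_buf[BAD_BLOCK_OFFSET] != 0xFF)
            return true;
    }
    return false;
}

And this is only the start: on top of such scanning sits the bad-block table, wear levelling and error correction, all of it tied to one part’s data sheet.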

With embedded multimedia card (eMMC) technology, the controller and the NAND flash were combined into one package. Since the entire hardware now belonged to the supplier, the supplier also wrote the software to manage the memory, making it easier for the engineer to switch suppliers.
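By contrast, here is a minimal host-side sketch, assuming a Linux system where the eMMC part appears as /dev/mmcblk0 (the device node name is board-dependent). The host reads logical sectors through a standard block-device interface; bad blocks and wear levelling stay hidden inside the part.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    unsigned char sector[512];
    int fd = open("/dev/mmcblk0", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* Read logical sector 0 -- no supplier-specific NAND commands,
     * spare areas or bad-block tables involved. */
    if (pread(fd, sector, sizeof(sector), 0) != (ssize_t)sizeof(sector)) {
        perror("pread");
        close(fd);
        return EXIT_FAILURE;
    }

    printf("first byte of sector 0: 0x%02x\n", sector[0]);
    close(fd);
    return EXIT_SUCCESS;
}

Because the interface is standardised, this code does not change when the eMMC supplier does.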


Supply problems are not a rarity either. For instance, memory market research firm DRAMeXchange predicted last year that NAND flash pricing would suffer because of a temporary shortage, caused by manufacturers switching to newer NAND production methods, the release of next-generation iPhones and demand for already constrained LPDDR3 mobile DRAM.

Vivek adds, “Today, most embedded NAND flash is made according to the eMMC standard, whether the memory is being used in automotive solutions, point-of-sale devices or other embedded systems. This shift made it possible for vendors to make better decisions on selecting memory without worrying about rewriting the entire software.”

Bill O’Connell, key account manager, Swissbit, explained to us at an industry event in Bengaluru that memory in medical and casino-gaming electronics must be highly reliable, and that memory in strategic and industrial electronics must additionally operate over a wider temperature range, from -40°C to 85°C. Sourcing memory to these specifications can be tougher than sourcing the usual kind, so you might need specialised vendors to do it for you.

Costs can escalate quickly
At the end of the day, cost per unit of the memory device is almost always a major constraint for system architects in any choice of system components.

While some devices, like SSDs and SRAM, offer large storage capacity, high speed and low power consumption, they cost a lot. The designer may have to compromise a little on these factors and go for an HDD, EEPROM or DRAM instead. USB drives and flash devices are typically cost-effective, but flash devices that come with added features, like wear levelling and bad-block management, are priced higher. The cost of remote storage depends on the number of devices sharing the memory.


“Even though people keep saying that their memory is unlimited, most systems are still designed for optimal resources (with respect to the hardware and software), and for optimal size and form factor. They are interested in finishing it with the bare minimum, as they have to pay for each bit of extra memory,” explains Shinto Joseph, operations and sales director, LDRA Technology Pvt Ltd, in an interview with EFY. While this amount might be negligible when you are looking at just one or two chips, it adds up to a lot across the quantities involved in mass production and shipping of a consumer product, as the quick calculation below shows.
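The figures here are purely hypothetical: an extra two US cents of memory cost per unit, over a one-million-unit production run.

#include <stdio.h>

int main(void)
{
    const double extra_cost_per_unit = 0.02;    /* USD, assumed figure */
    const long   units               = 1000000; /* assumed run size    */

    printf("extra BOM cost: USD %.0f\n", extra_cost_per_unit * units);
    return 0;
}

Two cents a unit is invisible on a prototype bench, but here it comes to USD 20,000 straight off the margin.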

Even an infrastructure system requires high-end, reliable memory of the kind used in critical systems, and every gigabyte (GB) costs a lot. This, too, causes designers to put a cap on the memory that can be used.
