[stextbox id="info" caption="Coming up…"]1. Six-core processors for desktops
2. Server processors with eight, twelve and even more cores
3. Heterogeneous multi-core processors
4. Smaller, more powerful multi-core chips based on 32nm logic technology
5. Single-chip cloud computer
6. Scalable multi-core architectures
7. Higher-bandwidth I/O and communications, for improved performance of multi-core chips
8. Better parallel programming tools, model-based tera-scale applications and thread-aware execution environments, to make better use of multi-core hardware[/stextbox]
R. Ravichandran, director-sales, Intel South Asia, says, "Industry reports point out that there will be approximately 10-15 billion devices on the Internet in the next 4-5 years, and most devices like TVs, embedded devices and other consumer electronic devices (beyond traditional desktops, notebooks and servers) will have a touch to the Internet. Given that there will be a proliferation of devices in the computing continuum, with rich media and video as killer applications, some of the handhelds and smartphones will need great computing capabilities that are energy-efficient… and multi-core will also pervade these segments."
Ravichandran cites the following examples to prove his point.
Safer and smarter roads. A number of traffic accidents are caused by worn-down car tires. Intel, along with industry players Kontron and ProContour, has developed an embedded tire-tread-monitoring technology that is making roads smarter. Kontron, a member of the Intel Embedded and Communications Alliance, has developed a camera based on an Intel Core 2 Duo processor that captures tire-tread depth as a tire passes over a specialised grate. The technology can alert drivers when their tires need replacement, helping avoid potentially dangerous tire blow-outs.
Medical imaging with multiple cores. Physicians today collect more complex imagery of their patients than ever before. In order to accurately diagnose diseases and develop treatment strategies in a minimally-invasive manner, new imaging modes, methods and hardware are needed.
In collaboration with the Mayo Clinic, Intel has presented a paper titled "Mapping High-Fidelity Volume Rendering for Medical Imaging to CPU, GPU and Many-Core Architectures," outlining how medical imaging benefits from parallel processing in the Intel micro-architecture code-named Nehalem. Medical volumetric imaging requires high-fidelity, high-performance rendering algorithms. The team reports more than an order of magnitude performance improvement on a number of large 3D medical datasets.
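The speed-up comes from the fact that volume rendering is naturally data-parallel: each pixel (or row) of the output image can be computed independently, so the work can be farmed out across cores. A minimal sketch of that decomposition is below; the volume data and "shading" here are toy placeholders, not the algorithms from the Intel/Mayo Clinic paper, and the pool-of-workers pattern is the only thing being illustrated.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy image and volume dimensions (placeholders, not medical-scale data).
WIDTH, HEIGHT, DEPTH = 64, 64, 32

def render_row(y):
    """Compute one output row: accumulate a fake 'density' along the
    viewing axis for each pixel. Rows are independent, so they can be
    rendered in parallel."""
    row = []
    for x in range(WIDTH):
        # Stand-in for sampling and compositing real voxel data.
        acc = sum((x * y + z) % 7 for z in range(DEPTH))
        row.append(acc)
    return row

# Each worker renders whole rows; map() preserves row order in the result.
with ThreadPoolExecutor() as pool:
    image = list(pool.map(render_row, range(HEIGHT)))
```

For CPU-bound rendering in Python one would normally use a process pool (or a native-code kernel) rather than threads, but the row-wise partitioning shown here is the same in either case.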
What to expect?
Look into any device a few years down the line and you are sure to find a multi-core processor in it. Specialists in the embedded arena, including ARM and RIM, are launching multi-core versions of their processors too.
Multi-core processors have already widened their ambit from supercomputers to desktops, mobiles and data centres. However, the real success and sustainability of the multi-core concept depends on whether it will be ably supported on the software front too, with a proper understanding and execution of true parallel programming principles. Considering the efforts being made by industry leaders to train developers on parallel techniques, this hurdle will be overcome soon, and multi-cores will be adopted even more rapidly.
The author is a freelance writer based in Bengaluru. She writes on a variety of topics, her favourites being technology, cuisine, and life.