The Power Of Grid Computing

Deepak Halan


The concept of tapping unused CPU cycles was born in the early 1970s, when computers were first being linked together over networks. In 1973, Xerox’s Palo Alto Research Center (PARC) installed the first Ethernet network, and the first full-fledged distributed computing effort was underway. Scientists John F. Shoch and Jon A. Hupp developed a program they called a worm, and envisioned it moving from machine to machine, using idle resources to do useful work. Richard Crandall, an eminent scientist at Apple, started putting idle, networked NeXT computers to work. He installed software that, when the machines were not in use, enabled them to perform computations and combine their efforts with other machines on the network.

The term grid computing was coined much later, in the early 1990s, as a metaphor for making computer power as easy to access as an electric power grid. CPU scavenging and volunteer computing became quite popular in 1997, thanks to distributed.net. Two years later, the SETI@home project was established to tap the power of networked PCs worldwide, with the objective of solving CPU-intensive research problems.

Ian Foster, Carl Kesselman and Steve Tuecke are widely regarded as the fathers of the grid, as they brought together the ideas of the grid (including those from distributed computing, object-oriented programming and Web services).

Ian Foster, Carl Kesselman and Steve Tuecke are regarded as the fathers of the grid (Image courtesy: http://www.calit2.net)

They pioneered the creation of the Globus Toolkit, which incorporates not only computation management but also storage management, security provisioning, data movement and monitoring, along with a toolkit for developing additional services based on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services and information aggregation.

The Globus Toolkit remains the de facto standard for building grid solutions. In 2007, the term cloud computing became a buzzword; it is conceptually similar to the canonical Foster definition of grid computing.

What grid computing is all about

Grid computing is a type of distributed computing that involves coordinating and sharing computing, application, data, storage or network resources across dynamic and geographically dispersed organisations. Basically, it is a computer network in which each computer’s resources are shared with every other computer in the system; that is, processing power, memory and data storage are all pooled resources that authorised users can access and control for specific projects.
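The sharing idea above can be illustrated with a toy model: a coordinator accepts tasks from authorised users and dispatches each one to whichever machine currently has the most idle capacity. This is only a single-process sketch; the class names (`Node`, `GridCoordinator`) are illustrative, and real grid middleware such as the Globus Toolkit layers security, data movement and monitoring on top of this basic pattern.

```python
class Node:
    """One machine contributing spare CPU capacity to the grid."""
    def __init__(self, name, free_slots):
        self.name = name
        self.free_slots = free_slots  # idle CPU cores this node offers

    def run(self, task):
        # In a real grid the task would be shipped over the network;
        # here we simply execute it locally.
        return task()


class GridCoordinator:
    """Dispatches each task to the node with the most idle capacity."""
    def __init__(self, nodes):
        self.nodes = nodes

    def submit(self, task):
        node = max(self.nodes, key=lambda n: n.free_slots)
        if node.free_slots == 0:
            raise RuntimeError("no idle capacity anywhere on the grid")
        node.free_slots -= 1          # claim a slot for this task
        try:
            return node.run(task)
        finally:
            node.free_slots += 1      # release the slot when done


# A user submits a CPU-intensive task; the coordinator picks the node.
grid = GridCoordinator([Node("lab-pc", 2), Node("cluster-head", 8)])
result = grid.submit(lambda: sum(i * i for i in range(1000)))
```

The key design point, which carries over to real grids, is that the submitter never names a machine: the coordinator owns the placement decision, so new nodes can join or leave without users changing anything.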

A grid computing system can be elementary and homogeneous, such as a pool of similar computers running the same operating system (OS), or complex and heterogeneous, such as inter-networked systems spanning nearly every computer platform that exists.

Grid computing started as a response to scientific users’ need to combine huge amounts of computing power to run very complex applications. The ad hoc assemblages of distributed resources were coordinated by software that mediated the various OSes and managed aspects like scheduling and security to create sophisticated, virtual computers.

Grid computing remains largely confined to the research community, and is an early form of the utility-style data processing services made feasible by the Internet. Peer-to-peer computing, which enables unrelated users to dedicate portions of their computers to cooperative processing via the Internet, is a related phenomenon used by both consumers and businesses. It harnesses a potentially large quantity of computing power in the form of excess, spare or dedicated system resources from the full range of computers spread across the Internet.

Grid-related technologies can change the way organisations deal with multifaceted computational problems. Many grids are constructed by using clusters or traditional parallel systems as their nodes. For example, World Wide Grid, used in evaluating Gridbus technologies and applications, has many nodes that are clusters.

Cluster computing is made up of multiple interconnected yet independent nodes that cooperatively work together as a single unified resource. Unlike grids, cluster resources are owned by a single organisation, and are managed by a centralised resource management and scheduling system that allocates resources to application jobs.
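The centralised scheduling that distinguishes a cluster from a grid can be sketched as a single manager that owns all the cluster's cores and starts queued jobs first-come, first-served as capacity allows. This is a minimal sketch under that assumption; the names (`Job`, `ClusterScheduler`) are hypothetical, and production schedulers add priorities, backfilling and fair-share policies.

```python
from collections import deque


class Job:
    """A user's application job and the cores it requests."""
    def __init__(self, name, cores_needed):
        self.name = name
        self.cores_needed = cores_needed


class ClusterScheduler:
    """Single resource manager owning every core in the cluster (FIFO)."""
    def __init__(self, total_cores):
        self.free_cores = total_cores
        self.queue = deque()   # jobs waiting for resources
        self.running = []      # names of jobs currently running

    def submit(self, job):
        self.queue.append(job)
        self._dispatch()

    def _dispatch(self):
        # Start queued jobs in order while enough cores remain free.
        while self.queue and self.queue[0].cores_needed <= self.free_cores:
            job = self.queue.popleft()
            self.free_cores -= job.cores_needed
            self.running.append(job.name)


# An 8-core cluster: the first job starts, the second must wait.
sched = ClusterScheduler(total_cores=8)
sched.submit(Job("simulate", 6))   # starts immediately, 2 cores left
sched.submit(Job("render", 4))     # queued: only 2 cores are free
```

Because one scheduler sees every node and every job, it can enforce ordering and fair allocation cluster-wide; a grid, by contrast, must negotiate with resources it does not own.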

Organisations that depend on computational power to further their business objectives often have to drop or scale back new projects, design ideas or innovations, simply for lack of computational bandwidth. Frequently, project demand exceeds the supply of computational power, despite considerable investment in dedicated computing resources. Upgrading or purchasing new hardware is an expensive option, and could soon become obsolete given rapidly changing technology.

Grid computing offers a better solution through improved utilisation and distribution of available IT resources. It can be used to execute scientific, engineering, industrial and commercial applications such as data mining, financial modelling, drug design, automobile design, crash simulation, aerospace modelling, high-energy physics, astrophysics, Earth modelling and so on.
