

Future of grids resources management

5.2 Several computing paradigms

5.2.1 Utility computing

Utility computing originated in the 1960s, when John McCarthy introduced the idea of the computer utility:

“If computers of the kind I have advocated become the computers of the future, then computing may someday be organized as a public utility just as the telephone system is a public utility . . . The computer utility could become the basis of a new and important industry.”

Generally, utility computing treats computing and storage resources as a metered service, like water, electricity, gas and telephone utilities [Yeo et al., 2006], [Paleologo, 2004], [Rappa, 2004]. Customers can use the utility services immediately, whenever and wherever they need them, without paying the initial cost of the devices. Utility computing relies on virtualization, so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Back-end servers such as computer clusters and supercomputers are used to realize the virtualization [Broberg et al., 2008]. In the late 90s, utility computing resurfaced. HP launched the utility data center to provide IP billing-on-tap services [HP, 2004]. PolyServe Inc. offers a clustered file system based on commodity server and storage hardware that creates highly available utility computing environments for mission-critical applications, and workload-optimized solutions specifically tuned for bulk storage, high-performance computing, and vertical industries such as financial services, seismic analysis and content serving. Thanks to these utilities, including database and file services, customers can independently add servers or storage as needed.
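The pay-per-use model described above can be illustrated with a minimal sketch in Python. The class name, the billing rate, and the choice of CPU time as the metered unit are all invented for illustration; a real provider meters many resource types (storage, bandwidth, I/O) against published tariffs.

```python
import time

class MeteredService:
    """Toy model of utility computing: resources are billed per unit
    consumed, like water or electricity, with no upfront device cost."""

    def __init__(self, rate_per_cpu_second):
        self.rate = rate_per_cpu_second  # hypothetical tariff
        self.usage = 0.0                 # accumulated CPU-seconds

    def run(self, task, *args):
        # Meter only the CPU time this task actually consumes.
        start = time.process_time()
        result = task(*args)
        self.usage += time.process_time() - start
        return result

    def bill(self):
        # Amount due is strictly proportional to metered usage.
        return self.usage * self.rate

service = MeteredService(rate_per_cpu_second=0.05)
service.run(sum, range(1_000_000))
print(f"amount due: ${service.bill():.6f}")
```

The point of the sketch is the contract, not the accounting: the customer never buys the machine, only the consumption.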

5.2.2 Grid computing

Grid computing emerged in the mid-90s. Ian Foster et al. integrated distributed computing, object-oriented programming and web services to define the grid computing infrastructure [Foster and Kesselman, 2004], [Foster et al., 2002]. Since then, many researchers have defined grid computing in various ways. Here, we choose the definition R. Buyya presented at the 2002 Grid Planet conference:

“A grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed ‘autonomous’ resources dynamically at runtime depending on their availability, capability, performance, cost, and users’ quality-of-service requirements.”

This definition means that a grid is actually a cluster of networked, loosely coupled computers which works as a single virtual mainframe to perform thousands of tasks. It can also divide a huge application job into several sub-jobs and run each on large-scale machines. Generally speaking, grid computing has gone through three different generations [Magoulès et al., 2008]. The first generation was marked by early metacomputing environments, such as Fafner and I-Way. The second generation was represented by the development of core grid technologies: grid resource management (e.g., Globus, Legion), resource brokers and schedulers (e.g., Condor, PBS) and grid portals (e.g., GridSphere). The third generation saw the convergence between grid computing and web services technologies (e.g., WSRF, OGSI). It moved to a more service-oriented approach that exposes the grid protocols using web service standards [Foster et al., 2001], [Shiers, 2009].
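The divide-and-aggregate pattern described above can be sketched as follows. The function names and the use of a thread pool to stand in for grid nodes are illustrative only; a real grid scheduler dispatches sub-jobs to geographically distributed machines through middleware such as Globus or Condor.

```python
from concurrent.futures import ThreadPoolExecutor

def subjob(chunk):
    # A sub-job: one "node" computes a partial result (here, a sum of squares).
    return sum(x * x for x in chunk)

def run_on_grid(data, n_nodes=4):
    # Split the large job into sub-jobs, dispatch one per node,
    # then aggregate the partial results, as a grid scheduler would.
    size = -(-len(data) // n_nodes)  # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        return sum(pool.map(subjob, chunks))

print(run_on_grid(list(range(10_000))))
```

The aggregation step works here because the sub-results combine associatively; jobs without that property need a more elaborate reduction plan.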

5.2.3 Autonomic computing

Autonomic computing was first proposed by IBM in 2001 with the following definition:

“Autonomic computing performs tasks that IT professionals choose to delegate to the technology according to policies, see [Liu et al., 2005]. Adaptable policy, rather than hard-coded procedure, determines the types of decisions and actions that autonomic capabilities perform.”

Given the sharply increasing number of devices, heterogeneous, distributed computing systems are becoming ever more difficult to anticipate, design and maintain, owing to the complexity of their interactions. This management complexity turns out to be a limiting factor for future development. Autonomic computing focuses on the self-management ability of the computer system. It aims to overcome the rapidly growing complexity of computing systems management and reduce the barrier that complexity poses to further growth.

In the area of multi-agent systems, several self-regulating frameworks have been proposed, but most of these architectures are centralized; they mainly reduce management costs and seldom consider enabling complex software systems or providing innovative services [Jin and Liu, 2004]. IBM defined the self-managing system as one that can automatically perform configuration of its components (self-configuration), monitoring and recovery of its resources (self-healing), monitoring and optimization of its resources (self-optimization), and proactive identification of and protection from arbitrary attacks (self-protection), with the only human input being policies defined in advance. In other words, the autonomic system uses high-level rules to check and optimize its status and automatically adapt itself to changing conditions.
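One cycle of such a self-managing control loop can be sketched as below. The metric names, thresholds and action labels are invented for illustration; the point is that a declarative policy, not hard-coded procedure, maps observed conditions to the four self-* responses.

```python
def autonomic_step(metrics, policy):
    """One monitor-analyze-plan cycle: policy rules (hypothetical
    thresholds) decide which corrective actions to execute."""
    actions = []
    if metrics["load"] > policy["max_load"]:                  # self-optimization
        actions.append("add_server")
    if not metrics["component_healthy"]:                      # self-healing
        actions.append("restart_component")
    if metrics["intrusion_score"] > policy["max_intrusion"]:  # self-protection
        actions.append("isolate_node")
    if metrics["new_nodes"]:                                  # self-configuration
        actions.append("configure_new_nodes")
    return actions

policy = {"max_load": 0.8, "max_intrusion": 0.5}
metrics = {"load": 0.93, "component_healthy": False,
           "intrusion_score": 0.1, "new_nodes": []}
print(autonomic_step(metrics, policy))  # -> ['add_server', 'restart_component']
```

Changing the system's behavior means editing the policy dictionary, not the control loop, which is exactly the delegation the IBM definition describes.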

5.2.4 Cloud computing

Cloud computing emerges as a new computing paradigm that provides reliable, customized, quality-of-service-guaranteed dynamic computing environments for end-users [Weiss, 2007]. It is often confused with several computing paradigms such as grid computing, utility computing and autonomic computing. From the descriptions above, we can draw the relationships among them. Utility computing is concerned with packaging computing resources as a metered service billed according to the user's needs. It is independent of how the resources are organized, whether in a centralized or a distributed system [Buyya et al., 2002], although companies now prefer to bundle the resources of their members to provide utility computing. Grid computing is conceptually similar to the canonical definition of cloud computing, but it does not manage economic entities, and it is less scalable than cloud computing. Because of this massive scale, cloud computing must pay close attention to interconnectivity management. In summary, cloud computing depends on grids, has autonomic characteristics and bills for resources as utilities; it can be seen as a natural next step from the grid-utility model.

The dominant computing paradigm varies with time. As shown in Figure 5.1, utility computing was discussed frequently between 2004 and 2005. As a popular term, grid computing is now losing its appeal. The term cloud emerged in 2007 and became a hot topic in both the research and industry domains.

From the day it was born, cloud computing surpassed grid computing and became more and more popular. Many industry projects have been started, including Amazon Elastic Compute Cloud, IBM's Blue Cloud, and Microsoft's Windows Azure. At the same time, HP, Intel Corporation and Yahoo! Inc. recently announced the creation of a global, multi-data-center, open-source cloud computing test bed for industry, research and education.

FIGURE 5.1: Google search trends for the last 5 years.

In order to analyze the reasons why cloud computing attracts so many researchers, we first clarify the definition of cloud computing in the following section.

5.3 Definition of cloud computing