
In the document DISTRIBUTED NETWORK SYSTEMS (Pages 35-39)


CHAPTER 1 OVERVIEW OF DISTRIBUTED NETWORK SYSTEMS

1.4 Software for Distributed Computing

1.4.1 Traditional Client-Server Model

The client-server model has been the dominant model for distributed computing since the 1980s. Its development was sparked by research on operating systems that could support users working at personal computers connected by a local area network. The issue was how to access, use and share a resource located on another computer, e.g., a file or a printer, in a transparent manner. In the 1980s computers were controlled by monolithic, kernel-based operating systems, in which all services, including file and naming services, were part of one huge piece of software. To access a remote service, the whole operating system had to be located and accessed on the computer providing the service. This implied a need to separate out of a kernel-based operating system that part of the software which provides only the desired service, and to embody it in a new software entity. This entity is called a server. Each user process, a client, can then access and use that server, subject to possessing access rights and a compatible interface. Thus, the idea of the client-server model is to build at least that part of an operating system which is responsible for providing services to users as a set of cooperating server processes.
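The request/response interaction between a client process and a server process can be sketched in a few lines of Python. This is only an illustrative toy, not code from the book: both processes run on one host, a dictionary stands in for a file service, and all names (`serve_once`, `request`, `motd`) are invented for the example.

```python
import socket
import threading

def serve_once(sock: socket.socket) -> None:
    """The server: waits for one request naming a resource and replies."""
    resources = {"motd": "hello from the server"}  # stand-in for a shared file
    conn, _addr = sock.accept()
    with conn:
        name = conn.recv(1024).decode()
        conn.sendall(resources.get(name, "not found").encode())

def request(port: int, name: str) -> str:
    """The client: connects, sends a request, and waits for the response."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(name.encode())
        return conn.recv(1024).decode()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=serve_once, args=(server,))
t.start()
reply = request(port, "motd")   # the client accesses the server's service
t.join()
server.close()
print(reply)                    # -> hello from the server
```

In a real system the server would run on a remote machine and the transparency discussed above would hide that fact from the client.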

In the 1990s the client-server model was used extensively to develop a huge number and variety of applications. The reasons are simple: it is a very clean model that adheres well to software modularity and usability requirements. The model allows a programmer to develop, test and install applications quickly, simplifies maintenance and lowers its costs, and makes it easier to prove the correctness of code.

The client-server model has also influenced the building of new operating systems, in particular distributed operating systems [Goscinski and Zhou 1999]. A distributed operating system supports transparency. Thus, when users access their personal computers they have the feeling of being supported by a very powerful computer, which provides a variety of services. This means that all computers and their connection to a communication network are hidden.

However, the simple client-server model is no longer sufficient to describe the various components, and their activities, of a Web-based client-server computing system. The Internet, and in particular Web browsers and further developments in Java programming, have extended client-server computing and systems. This is manifested by different forms of cooperation between remote computers and software components.

1.4.2 Web-Based Distributed Computing Models

The Internet and WWW have influenced distributed computing through the global coverage of the network, the distribution and availability of Web servers, and the architecture of executing programs. To meet the requirements of the rapid development of the Internet, distributed computing may need to shift its environment from the LAN to the Internet.

At the execution level, distributed computing/applications may rely on the following parts:

Processes: A typical operating system on a computer host can run several processes at once. A process is created by describing a sequence of steps in a programming language, compiling the program into an executable form, and running the executable in the operating system. While it is running, a process has access to the resources of the computer (CPU time, I/O devices and communication ports) through the operating system. A process can be devoted entirely to one application, or several applications can use a single process to perform tasks.
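The create-and-run cycle described above can be demonstrated from Python, which is not the book's own example: we ask the operating system to start a separate process from an executable (here the Python interpreter itself, running a one-line program) and collect its output.

```python
import subprocess
import sys

# The OS creates a new process from an executable program; the parent
# process communicates with it through the operating system.
result = subprocess.run(
    [sys.executable, "-c", "print(6 * 7)"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # -> 42
```

The child runs with its own resources (CPU time, I/O) granted by the operating system, independently of the parent.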

Threads: Every process has at least one thread of control. Some operating systems support the creation of multiple threads of control within a single process. Each thread in a process can run independently of the other threads, but the threads share some of the process's memory, such as the heap, and therefore usually need synchronization. For example, one thread may monitor input from a socket connection while another processes users' requests or accesses the database.
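That monitor/worker division of labour can be sketched as follows. This is an illustrative toy, not from the original text: a queue stands in for the socket connection, and a lock synchronizes access to a shared list on the heap.

```python
import queue
import threading

requests: "queue.Queue[str]" = queue.Queue()   # shared between the threads
results = []                                   # heap object both threads see
lock = threading.Lock()                        # synchronizes the shared list

def monitor() -> None:
    # Stand-in for a thread watching a socket: it forwards incoming items.
    for item in ["req-1", "req-2"]:
        requests.put(item)
    requests.put(None)                         # sentinel: no more input

def process() -> None:
    # A second thread of the same process handles each request.
    while (item := requests.get()) is not None:
        with lock:
            results.append(item.upper())

t1 = threading.Thread(target=monitor)
t2 = threading.Thread(target=process)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # -> ['REQ-1', 'REQ-2']
```

Both threads run within a single process and share its memory, which is why the lock (or the thread-safe queue) is needed.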

Distributed Objects: Distributed object technologies are best typified by the Object Management Group's Common Object Request Broker Architecture (CORBA) [OMG 1998] and Microsoft's Distributed Component Object Model (DCOM) [Microsoft 1998]. In these approaches, interaction control lies solely with the requesting object: an explicit method call, using a predefined interface specification, initiates access to the service. The service is provided by a remote object, through a registry that finds the object and then mediates the request and its response. Although distributed object models offer a powerful paradigm for creating networked applications composed of objects potentially written in different programming languages, their hard-coded communication interactions make it difficult to reuse an object in a new application without bringing along all the services on which it depends, and reworking the system to incorporate new services that were not initially foreseen is a complex task.
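The find-then-mediate role of the registry can be illustrated with a toy in-process sketch. This is not CORBA or DCOM code; `Registry` and `Calculator` are invented names, and real brokers additionally marshal the call across the network.

```python
class Registry:
    """Toy stand-in for a broker's registry: it finds the named object,
    then mediates the request and its response."""
    def __init__(self) -> None:
        self._objects = {}

    def register(self, name, obj) -> None:
        self._objects[name] = obj

    def invoke(self, name, method, *args):
        obj = self._objects[name]           # locate the remote object
        return getattr(obj, method)(*args)  # forward the explicit method call

class Calculator:
    # The predefined interface the requesting object codes against.
    def add(self, a: int, b: int) -> int:
        return a + b

broker = Registry()
broker.register("calc", Calculator())
result = broker.invoke("calc", "add", 2, 3)  # explicit call via the registry
print(result)  # -> 5
```

Note how the caller names both the object and the method: interaction control lies entirely with the requesting side, which is precisely the hard-coded coupling criticized above.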

Agents: It is difficult to define this overused term, i.e., to differentiate an agent from a process or an (active) object, and to say how it differs from a program. An agent has been loosely defined as a program that assists users and acts on their behalf; this is the end-user perspective on software agents. In contrast to the software objects of object-oriented programming, from this perspective agents are active entities that obey the following mandatory behavior rules:

R1: Works to meet the designer's specifications.

R2: Autonomous: has control over its own actions, provided this does not violate R1.

R3: Reactive: senses changes in requirements and environment, and is able to act according to those changes, provided this does not violate R1.

An agent may possess any of the following orthogonal properties from the perspective of systems:

Communication: able to communicate with other agents.

Mobility: able to travel from one host to another.

Reliability: able to tolerate a fault when one occurs.

Security: able to appear trustworthy to the end user.

Our definition differs from others in that agents must execute according to design specifications.
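Rules R1-R3 can be made concrete with a toy agent class. This is a sketch under stated assumptions, not from the original text: the "specification" is simply a work limit, and all names (`Agent`, `spec_limit`, `act`, `sense`) are invented for the illustration.

```python
class Agent:
    """Toy agent obeying R1-R3: it works to a designer's specification
    (R1), decides its own actions within that limit (R2), and reacts to
    a changed requirement (R3)."""
    def __init__(self, spec_limit: int) -> None:
        self.spec_limit = spec_limit   # R1: the designer's specification
        self.workload = 0

    def act(self, requested: int) -> int:
        # R2: autonomous -- the agent itself caps the accepted work so
        # that no request can make it violate R1.
        accepted = min(requested, self.spec_limit - self.workload)
        self.workload += accepted
        return accepted

    def sense(self, new_limit: int) -> None:
        # R3: reactive -- adapt to a change in the requirements.
        self.spec_limit = new_limit

agent = Agent(spec_limit=10)
print(agent.act(7))   # -> 7 (within the specification)
print(agent.act(7))   # -> 3 (autonomously truncated to respect R1)
agent.sense(20)       # the environment raises the limit
print(agent.act(7))   # -> 7
```

The contrast with a passive object is that the agent, not its caller, decides how much of each request to carry out.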

According to the client-server model there are two processes: a client, which requests a service from another process, and a server, which is the service provider. The server performs the requested service and sends back a response. This response could be a processing result, a confirmation of completion of the requested operation, or even notice of the failure of an operation. Accordingly, the current form of Web-based distributed computing can be called Web-based client-server computing.

1.4.3 Web-based Client-Server Computing

We have categorized Web-based client-server computing systems into four types: the proxy computing, code shipping, remote computing and agent-based computing models. The proxy computing (PC) model is typically used in Web-based scientific computing. In this model a client sends data and programs to a server over the Web and requests the server to perform a certain computation. The server receives the request, performs the computation using the programs and data supplied by the client, and returns the result to the client.

Typically, the server is a powerful high-performance computer or it has some special system programs (such as special mathematical and engineering libraries) that are necessary for the computation. The client is mainly used for interfacing with users.

The code shipping (CS) model is a popular Web-based client-server computing model. A typical example is the downloading and execution of Java applets on Web browsers, such as Netscape Communicator and Internet Explorer. According to this model, a client makes a request to a server, the server then ships the program (e.g., the Java applets) over the Web to the client and the client executes the program (possibly) using some local data. The server acts as the repository of programs and the client performs the computation and interfaces with users.

The remote computing (RC) model is typically used in Web-based scientific computing and database applications [Sandewall 1996]. According to this model, the client sends data over the Web to the server and the server performs the computing using programs residing in the server. After the completion of the computation, the server sends the result back to the client. Typically the server is a high-performance computing server equipped with the necessary computing programs and/or databases. The client is responsible for interfacing with users. The NetSolve system [Casanova and Dongarra 1997] uses this model.

The agent-based computing (AC) model is a three-tier model. According to this model, the client sends either data, or data and programs, over the Web to the agent. The agent then processes the data using its own programs or the received programs. After the processing completes, the agent either sends the result back to the client, if the result is complete, or sends the data/program/intermediate result to the server for further processing. In the latter case, the server performs the job and returns the result to the client directly (or via the agent). Nowadays, more and more Web-based applications have shifted to the AC model [Chang and Scott 1996] [Ciancarini et al 1996].
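The four models differ only in where the program and the data travel, which can be summarized in a toy in-process sketch. This is not from the book: plain function calls stand in for Web transfers, a Python function stands in for the shipped program, and all names are illustrative.

```python
def square(x: int) -> int:   # the "program" being shipped or hosted
    return x * x

def run(program, data):      # stand-in executor for either side
    return program(data)

def proxy_computing(server_exec, program, data):
    # PC: client sends both program and data; the server executes.
    return server_exec(program, data)

def code_shipping(client_exec, program, local_data):
    # CS: server ships the program; the client runs it on local data.
    return client_exec(program, local_data)

def remote_computing(server_program, data):
    # RC: client sends data only; the server's resident program computes.
    return server_program(data)

def agent_based(agent_program, server_program, data, complete: bool):
    # AC: the agent processes first; if its result is not complete,
    # the server finishes the job and the result goes back to the client.
    partial = agent_program(data)
    return partial if complete else server_program(partial)

print(proxy_computing(run, square, 3))                 # -> 9
print(code_shipping(run, square, 4))                   # -> 16
print(remote_computing(square, 5))                     # -> 25
print(agent_based(square, square, 2, complete=False))  # -> 16
```

Reading the four functions side by side makes the classification concrete: PC and RC compute at the server (with and without a shipped program), CS computes at the client, and AC interposes an agent tier between the two.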
