
Chapter 4. Support for processes


In the previous chapters we have considered a static description of a software system as a number of modules and the functions of some of the major operating system modules. The concept of 'process' as an active element which causes the modules to be executed dynamically was introduced briefly. We now return to and develop this concept in order to satisfy the requirement, established in Chapter 1, that separate activities should be supported in systems. This support, as provided by operating systems, is now examined in detail.

We show that one function of an operating system is to create the abstraction of a set of concurrent processes. Having created this abstraction, processes may be used to execute both operating system and application modules.


Terminology

Recall that we are using the classical term process to indicate an active element in a system. Many current systems denote their practical unit of scheduling as a thread. Up to Section 4.11 the term thread can be used interchangeably with process for the unit of scheduling. Later sections discuss the implementation of concurrent programs and need to distinguish between the unit of resource allocation (the process) and the unit of scheduling (a process or a thread, depending on whether the system supports multi-threaded processes).
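
The distinction can be made concrete in Java: in the minimal sketch below, the JVM process is the unit of resource allocation, while the two threads within it are separate units of scheduling that share the process's address space. The class and thread names are illustrative assumptions, not taken from the text.

    // One JVM process (unit of resource allocation) containing two
    // threads (units of scheduling) that share its address space.
    public class MultiThreadedProcess {
        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> System.out.println(
                    Thread.currentThread().getName() + " is scheduled independently");
            Thread t1 = new Thread(task, "worker-1");
            Thread t2 = new Thread(task, "worker-2");
            t1.start();   // each thread is dispatched separately...
            t2.start();
            t1.join();    // ...but both share the resources of the
            t2.join();    // single enclosing process
        }
    }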


4.1 Use of processes in systems

The designers of early operating systems solved the problems of concurrency and synchronization in an ad hoc way. It was always necessary to support synchronization between programs doing input or output and the corresponding devices, to take account of the great disparity in processor and device speeds. During the 1960s the concept of process came to be used explicitly in operating systems design, for example in Multics (Corbato and Vyssotsky, 1965), THE (Dijkstra, 1968) and RC4000 (Brinch Hansen, 1970).

One aspect of designing a system is to decide where processes should be used. A natural assignment is to allocate a process wherever there is a source of asynchronous events. In an operating system, for example, a process could be allocated to look after (synchronize with) each hardware device. If a user switches on a terminal and presses the escape key, an operating system process is waiting to respond. In an industrial control system a process could be allocated to each monitoring sensor and controlling actuator.
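
In a threaded system this allocation can be expressed directly: the sketch below dedicates one Java thread to each event source. A BlockingQueue stands in for a hardware sensor here, purely as an illustrative assumption; each handler thread blocks until its 'sensor' delivers an event.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // A sketch of 'one process per event source': each handler thread
    // synchronizes with its own sensor, modelled here by a queue.
    public class SensorHandlers {
        public static void main(String[] args) throws InterruptedException {
            List<BlockingQueue<String>> sensors = new ArrayList<>();
            for (int i = 0; i < 3; i++) {
                BlockingQueue<String> sensor = new ArrayBlockingQueue<>(16);
                sensors.add(sensor);
                int id = i;
                Thread handler = new Thread(() -> {
                    try {
                        while (true) {
                            String event = sensor.take(); // block until an event arrives
                            System.out.println("handler " + id + " handled " + event);
                        }
                    } catch (InterruptedException e) {
                        return; // interpreted as shutdown in this sketch
                    }
                }, "sensor-handler-" + id);
                handler.setDaemon(true); // let the JVM exit when main returns
                handler.start();
            }
            for (BlockingQueue<String> s : sensors) s.put("reading"); // simulate events
            Thread.sleep(100); // crude: give the handlers time to run
        }
    }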

Another natural allocation of processes is to assign at least one process to each independent unit of work comprising a loaded program, data and library. Such a process will make system calls on the operating system to request service, for example, to request input, output, use of a file, etc.

Figure 4.1 shows two active processes assigned to execute the static modules of Figures 3.12 and 3.13. One executes the user program and makes library and system calls to do I/O; the other programs the device, taking requests from user programs and transferring data to or from the device. We have made an assumption in the figure: that a user process enters the operating system (with a change of privilege as discussed in Section 3.3.2) and executes the top level of the operating system. This is only one way in which the operating system might be designed (see Section 4.10).

Figure 4.1. Part of a device-handling subsystem showing processes.

We have assumed that processes exist in a system and have outlined how they might be used. We now focus on how they are supported and managed by the operating system. We assume that there is a process management module within the operating system which 'knows about' a number of processes, such as those handling devices and those executing users' programs.


4.2 Processes and processors

There are usually far fewer processors than the processes we should like to use in a system. If this were not the case, we could dedicate a processor permanently to each process. When the process has something to do, it executes on its processor; when it has nothing to do, its processor idles. In practice, the operating system must perform the function of sharing the real processors among the processes. We shall see that this function can be regarded as creating virtual processors, one for each process; that is, the operating system simulates one processor per process.

In Section 3.2.5 we saw that interrupts from devices may be given a priority ordering and that the handling of a low-priority interrupt is temporarily abandoned if a high-priority interrupt arrives, and is resumed after the high-priority interrupt has been handled.

This idea can be applied to processes as well as to the interrupt routines which are entered by a hardware mechanism. A user process may be busily inverting a large matrix when an interrupt arrives (and its service routine is executed) to say that a block of data requested from the disk is now in memory and the disk is free to accept further commands. The disk-handling process should run as soon as possible, to keep a heavily used resource busy. After that, either the matrix program should resume, or the data from the disk was awaited by some more important user process, which should then run in preference to the matrix program. We assume that the operating system's process management function will implement a policy such as the one outlined here.

Consider two processes in detail: Figure 4.2 is a time graph and shows two device handler processes (such as the one shown in Figure 4.1) sharing a single processor. It also shows when their respective devices are active and when the associated interrupt service routines (ISRs) run on the processor. We assume, in this example, that an ISR does not run as part of any process.

Figure 4.2. Time graph of two device handler processes, running on one processor.

Initially, process A runs, starts its device then gets to a point where it can do no more until the device completes its activity. It must be possible for the process to indicate this fact to the process management function, shown here as WAIT. When a process executes WAIT it changes its state from running to blocked. Process B is then run on the processor, starts its device then WAITs. If only A and B are available for running then the system becomes idle. In practice, there may be some other process that can be run. The next event shown in the graph is that A's device signals an interrupt which is taken by the processor and A's interrupt service routine is entered. This makes process A able to run again – its state changes from blocked to runnable and then to running when it is selected to run on the processor.
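
The effect of WAIT can be sketched in Java using monitor operations: wait() plays the role of WAIT, and a notify() issued on behalf of the interrupt service routine makes the blocked process runnable again. All class and method names below are illustrative assumptions, not the text's own mechanism.

    // A sketch: the handler blocks itself until the 'ISR' (here just
    // another thread) signals that the device has completed.
    public class WaitSketch {
        private final Object deviceDone = new Object();
        private boolean completed = false;

        // Device handler: runs, starts its device, then WAITs.
        void handler() throws InterruptedException {
            startDevice();
            synchronized (deviceDone) {
                while (!completed) {
                    deviceDone.wait();   // running -> blocked
                }
                completed = false;       // woken: blocked -> runnable -> running
            }
            System.out.println("transfer complete; handler continues");
        }

        // Stands in for the interrupt service routine.
        void isr() {
            synchronized (deviceDone) {
                completed = true;
                deviceDone.notify();     // make the blocked handler runnable
            }
        }

        private void startDevice() { /* would program the device registers */ }

        public static void main(String[] args) throws InterruptedException {
            WaitSketch s = new WaitSketch();
            Thread handler = new Thread(() -> {
                try { s.handler(); } catch (InterruptedException e) { }
            });
            handler.start();
            Thread.sleep(50);            // let the handler reach its WAIT
            s.isr();                     // simulate the device interrupt
            handler.join();
        }
    }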

While process A is running, B's device signals an interrupt which is taken by the processor, interrupting process A. B's interrupt service routine is executed and finishes at time T, shown in the graph.

A policy decision must be made over what should happen at time T. In some operating systems, the policy is that process A must resume whatever the relative priorities of A and B are. The justification is that A has not voluntarily executed a WAIT and should be allowed to continue until it does so. This is called non-preemptive scheduling: processes only lose control of the processor on a voluntary basis.

The UNIX operating system schedules kernel processes (those executing the operating system) in this way. The advantage is simplicity – a process can be assumed to have tidied up its data structures, and not be hogging some valuable resource if it is allowed to finish its current task. The disadvantage is slow response to hardware events. Non-preemptive scheduling is useless if fast response to unpredictable events is required; for example, alarm signals or other events in hard real-time systems.
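
A non-preemptive scheduler can be sketched as a loop that runs each task until the task voluntarily gives up the processor; the scheduler regains control only when the task returns. The Task interface below is an assumption introduced for illustration, standing in for a process that runs up to its next WAIT.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // A sketch of non-preemptive scheduling: a running task is never
    // interrupted; it loses the processor only by returning (WAITing).
    public class NonPreemptiveScheduler {
        interface Task {
            boolean runUntilWait(); // run to the next voluntary WAIT; false when finished
        }

        private final Deque<Task> runnable = new ArrayDeque<>();

        void add(Task t) { runnable.addLast(t); }

        void schedule() {
            while (!runnable.isEmpty()) {
                Task t = runnable.removeFirst();
                boolean wantsMore = t.runUntilWait(); // cannot be preempted mid-run
                if (wantsMore) runnable.addLast(t);   // voluntary WAIT: requeue
            }
        }
    }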


4.3 Process state

The discussion of Section 4.2 highlights that a process may be in a number of distinct states, illustrated in Figure 4.3 (and sketched in code after the list):

• running on a processor: RUNNING state;

• able to run on a processor: RUNNABLE state;

• unable to run on a processor because it is awaiting some event and cannot proceed until that event arrives: BLOCKED state.
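
These states map naturally onto an enumeration in a per-process descriptor. The minimal Java sketch below records the state and the transitions just described; the class and method names are illustrative assumptions.

    // A sketch of a process descriptor recording the states of
    // Figure 4.3, with methods mirroring the transitions in the text.
    public class ProcessDescriptor {
        enum State { RUNNING, RUNNABLE, BLOCKED }

        private State state = State.RUNNABLE;

        void dispatch()     { state = State.RUNNING;  } // selected to run on a processor
        void doWait()       { state = State.BLOCKED;  } // executes WAIT, awaiting an event
        void eventArrived() { state = State.RUNNABLE; } // awaited event has been signalled
        void preempt()      { state = State.RUNNABLE; } // processor taken away

        State current()     { return state; }
    }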
