
Experimental System


Figure 7-23. Overflow control experiment.

If the buffering system is invoked when the message rate is high, messages are quickly batched, which helps the system escape from buffering.

A buffering system in which buffer insertion is performed in hardware can have a higher throughput in this batch-buffering mode than a system using software buffering, because the insertion cost is part of the minimum per-message overhead. Compared to a system with hardware buffering, an application on a system with two-case delivery must accept either a lower average message throughput or a synchronization rate higher than the rate of buffering incidents in order to avoid runaway buffering. Runaway buffering is handled by overflow control, which we examine next.

7.5 Overflow Control

The previous section shows that demand for buffering in the direct VNI is not an issue for well-behaved applications. There remains the case of ill-behaved applications, including applications under development and applications encountering unexpected conditions at runtime. This problem is unique to systems, like the direct VNI, that provide virtual buffering and guaranteed delivery.

Systems with limited buffering avoid the problem by performing flow control as part of the message protocol. Rather than adding flow control overhead to every message, the direct VNI approach is to provide “overflow control” at a much coarser level via scheduling. This section evaluates an implementation of one overflow control mechanism and policy.

We use a limited version of the overflow control algorithm described in Chapter 5. The paging system in Glaze/PhOS is incomplete; rather than introduce a paging system, we use overflow control with the “low-water” mark, Nlow, effectively set to infinity. The resulting system is subject to deadlock, but it is sufficient for our purposes because the test application does not deadlock. To briefly reiterate, the algorithm works as follows:

The buffer-insert handler of the virtual buffering system for an application compares the number of pages currently in use for buffering, Lqueue, to a threshold, Lthreshold. If the number of pages in any one process exceeds the threshold, then the application globally switches into an overflow-control mode (Figure 7-23).

In overflow-control mode, the thread schedulers in each process of the application schedule only threads that tend to consume messages (i.e., the virtual buffering system’s cleanup thread).

The application remains in overflow-control mode until the number of pages in use falls back to zero. At that point, the application globally switches back to its normal mode.
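The mode-switching logic described above can be sketched as follows. This is an illustrative sketch only, not the Glaze/PhOS implementation; the class, method, and attribute names are invented, and the real mechanism lives in a user-level library and coordinates globally over the second network.

```python
# Hypothetical sketch of the overflow-control algorithm described above.
# All names (OverflowControl, on_buffer_insert, consumes_messages, ...) are
# illustrative assumptions, not the actual Glaze/PhOS interfaces.

class OverflowControl:
    def __init__(self, threshold):
        self.l_threshold = threshold  # Lthreshold: pages allowed before throttling
        self.l_queue = 0              # Lqueue: pages currently used for buffering
        self.overflow_mode = False

    def on_buffer_insert(self, pages_needed=1):
        """Buffer-insert handler: called when a message must be buffered."""
        self.l_queue += pages_needed
        if not self.overflow_mode and self.l_queue > self.l_threshold:
            # Threshold exceeded: switch the application into overflow-control mode.
            self.overflow_mode = True

    def on_pages_freed(self, pages=1):
        """Called by the cleanup thread as buffered messages are consumed."""
        self.l_queue = max(0, self.l_queue - pages)
        if self.overflow_mode and self.l_queue == 0:
            # Buffer fully drained: switch back to normal mode.
            self.overflow_mode = False

    def runnable(self, thread):
        """Scheduler filter: in overflow mode, run only message consumers."""
        return (not self.overflow_mode) or thread.consumes_messages
```

Note that, as in the algorithm above, the mode is sticky: once entered, overflow-control mode persists until the page count falls all the way back to zero, not merely below the threshold.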

The second network is used to communicate the transition from normal mode to overflow mode, since the main network tends to be blocked at the time. The main network is used for all other communication. The implementation is limited in that the mechanism is cooperative in two ways.

Threshold    Pages used per process
              0    1    2    3    4    5    6    7
    5         8    7    7    7    7    7    7   10
    6         9    8   23    9    8    8    8    9
    7        10    9   38    9    9    9    9    9
    8        11   10   10   11   10   10   11   13
    9        12   11   43   11   12   11   11   11
   10        13   12   12   12   17   12   12   12
   11        14   13   13   13   13   13   13   13
   12        15   14   14   15   14   14   14   14
   13        16   15   15   15   15   15   15   15
   14        17   16   16   16   22   16   16   17
   15        18   17   17   17   30   17   17   17
   16        19   18   18   18   18   18   18   19
   17        20   19   19   19   20   19   19   20
   18        21   20   20   20   20   20   20   20
   19        22   21   21   21   21   21   22   21
    ⋮
    ∞       495  463  478  469  483  486  483  797

Table 7-5. Overflow control limits the number of pages required to the threshold value, with a few isolated exceptions. The table lists the maximum number of pages used over all processors and over the lifetime of the application with overflow control, versus the control threshold. A threshold of ∞ corresponds to disabling overflow control.

First, the virtual buffering system and the thread scheduler are implemented in a user-level library that an application can contrive to avoid or change. Second, the mechanism makes assumptions about which threads send messages. In particular, we assume that message handlers do not send (many) messages.

We use a synthetic test application because, as shown in Section 7.4, our real applications do not incur excessive buffering. The test application runs alone, in parallel on all processors. Each process in the application runs a loop that sends messages to a neighboring process. The message handlers are contrived so that they all induce buffering: they invoke DMA across a page boundary, a case the operating system currently handles by voluntarily invoking buffering, as described in Appendix A.

Overflow control keeps the number of pages required for buffering low. Table 7-5 tabulates the maximum number of pages required per process (Lqueue) over the lifetime of the application in each experimental run. Each run uses a different threshold, Lthreshold. The features of the table are as follows. First, with overflow control disabled (a threshold of ∞), the number of pages required per processor is about 500. This number is a characteristic of the test application. Second, for the bulk of the table, the number of pages required is within about three of the high-water setting, showing good control. The maximum exceeds the threshold because the overflow control mechanism takes some time to throttle the influx of messages; during that time, more messages can be received. Finally, several data points (e.g., 38 and 43 pages at thresholds of 7 and 9) show that occasionally even more pages are required. This effect is possible because the overflow control mechanism itself is implemented at user level, not in the kernel, and thus is subject to slowdowns from page faults, unfortunate scheduling, and so on. Any slowdown in applying the overflow control mechanism leaves more pages in the buffer.

Figure 7-24. A plot of the average of the maximum number of pages used over all processors over the lifetime of the application with overflow control, versus the control threshold (eight processors).

The result is that this overflow control mechanism shows good control over the number of pages required. Figure 7-24 shows how the number of pages required generally follows the threshold.
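As a rough illustration of why the observed maxima sit a few pages above the threshold, the following toy single-process model (not the thesis experiment) throttles the sender only after a short notification delay. The `lag` and `total_msgs` parameters are invented for illustration, not measured values.

```python
# Toy model: each loop iteration is one scheduling step on a single process.
# `lag` models the delay before the overflow-mode switch takes effect;
# it and `total_msgs` are invented parameters, not measured values.

def simulate(threshold, lag=3, total_msgs=500):
    pages = 0          # Lqueue: pages currently holding buffered messages
    max_pages = 0
    overflow = False   # overflow-control mode flag
    pending = 0        # steps elapsed since the threshold was crossed
    sent = 0
    while sent < total_msgs or pages > 0:
        if overflow:
            # Only the cleanup thread runs: drain one page per step.
            pages -= 1
            if pages == 0:
                overflow = False
        elif sent < total_msgs:
            # Normal mode: every message induces buffering, as in the test app.
            pages += 1
            sent += 1
            if pages > threshold:
                pending += 1
                if pending >= lag:  # throttling finally takes effect
                    overflow = True
                    pending = 0
        else:
            pages -= 1  # sending finished; cleanup drains the remainder
        max_pages = max(max_pages, pages)
    return max_pages
```

Under these assumptions the maximum page count is the threshold plus the lag, mirroring the within-about-three behavior in the bulk of Table 7-5; with an effectively infinite threshold, the maximum is simply the total number of messages buffered, analogous to the disabled-control row.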

We conclude that overflow control is a promising approach to limiting buffer consumption without introducing protocol overhead into the fast case:

Conclusion 5: Physical buffer consumption can be controlled via nonintrusive overflow control.

Chapter 8