
4.5. Virtual Memory Performance Implications

While virtual memory makes it possible for computers to more easily handle larger and more complex applications, as with any powerful tool, it comes at a price. The price in this case is one of performance: a virtual memory operating system has a lot more to do than an operating system that is not capable of virtual memory. This means that performance will never be as good with virtual memory as with the same application that is 100% memory-resident.

However, this is no reason to throw up one’s hands and give up. The benefits of virtual memory are too great for that. And, with a bit of effort, good performance is possible. What must be done is to examine the system resources that are impacted by heavy use of the virtual memory subsystem.

4.5.1. Worst Case Performance Scenario

For a moment, take what you have read in this chapter and consider what system resources are used by extremely heavy page fault and swapping activity (a short monitoring sketch follows the list):

RAM — It stands to reason that available RAM will be low (otherwise there would be no need to page fault or swap).

Disk — While disk space would not be impacted, I/O bandwidth would be.

CPU — The CPU will be expending cycles doing the necessary processing to support memory management and setting up the necessary I/O operations for paging and swapping.
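To make this activity visible, here is a minimal monitoring sketch. It is not from the original text: it assumes a Linux kernel that exposes /proc/vmstat (2.6 and later; the 2.4 kernels of the Red Hat Linux 8.0 era kept similar counters in /proc/stat) with the field names pgfault, pgmajfault, pswpin, and pswpout.

    # Sketch: sample the kernel's paging and swapping counters.
    # Assumes /proc/vmstat provides the fields pgfault, pgmajfault,
    # pswpin, and pswpout (present on 2.6 and later kernels).
    import time

    FIELDS = ("pgfault", "pgmajfault", "pswpin", "pswpout")

    def read_vm_counters():
        """Return the selected cumulative counters from /proc/vmstat."""
        counters = {}
        with open("/proc/vmstat") as f:
            for line in f:
                name, value = line.split()
                if name in FIELDS:
                    counters[name] = int(value)
        return counters

    previous = read_vm_counters()
    for _ in range(12):                      # watch for one minute
        time.sleep(5)
        current = read_vm_counters()
        deltas = ["%s/5s=%d" % (k, current[k] - previous[k]) for k in FIELDS]
        # Sustained nonzero pswpin/pswpout alongside heavy pgmajfault
        # activity is the classic signature of a thrashing system.
        print("  ".join(deltas))
        previous = current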



The interrelated nature of these loads makes it easy to see how resource shortages can lead to severe performance problems. All it takes is:

A system with too little RAM

Heavy page fault activity

A system running near its limit in terms of CPU or disk I/O

At this point, the system will be thrashing, with performance rapidly decreasing.

4.5.2. Best Case Performance Scenario

At best, the virtual memory subsystem presents a minimal additional load to a well-configured system:

RAM — Sufficient RAM for all working sets, with enough left over to handle any page faults [3]

Disk — Because of the limited page fault activity, disk I/O bandwidth would be minimally impacted

CPU — The majority of CPU cycles will be dedicated to actually running applications, instead of memory management

From this, the overall point to keep in mind is that the performance impact of virtual memory is minimal when it is used as little as possible. This means that the primary determinant of good virtual memory subsystem performance is having enough RAM.
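As a rough first check of whether a system has enough RAM, the kernel’s memory summary can be read directly. The following sketch is illustrative rather than part of the original text; it assumes the modern one-field-per-line /proc/meminfo format, whose MemTotal, MemFree, SwapTotal, and SwapFree values are reported in kilobytes.

    # Sketch: a rough "enough RAM?" check based on /proc/meminfo.
    # Assumes the standard fields MemTotal, MemFree, SwapTotal, and
    # SwapFree, each reported in kB.

    def read_meminfo():
        """Return /proc/meminfo as a dict of field name -> kilobytes."""
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                name, rest = line.split(":", 1)
                info[name] = int(rest.split()[0])   # value in kB
        return info

    info = read_meminfo()
    swap_used_kb = info["SwapTotal"] - info["SwapFree"]
    print("RAM:  %d of %d kB free" % (info["MemFree"], info["MemTotal"]))
    print("Swap: %d of %d kB in use" % (swap_used_kb, info["SwapTotal"]))
    # Heavily-used swap on a busy system suggests that the working sets
    # no longer fit in RAM, which is exactly the condition to avoid.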

Next in line (but much lower in relative importance) are sufficient disk I/O and CPU capacity. However, these resources only help the system performance degrade more gracefully from heavy faulting and swapping; they do little to help the virtual memory subsystem performance (although they obviously can play a major role in overall system performance).

[2] Under Red Hat Linux the system swap space is normally a dedicated swap partition, though swap files can also be configured and used.

[3] A reasonably active system will always experience some page faults, if for no other reason than because a newly-launched application will experience page faults as it is brought into memory.

Chapter 5. Managing Storage

If there is one thing that takes up the majority of a system administrator’s day, it would have to be storage management. It seems that disks are always running out of free space, becoming overloaded with too much I/O activity, or failing unexpectedly. Therefore, it is vital to have a solid working knowledge of disk storage in order to be a successful system administrator.

To start, let us see how disk devices are named under Red Hat Linux.

5.1. Device Naming Conventions

As with all Linux-like operating systems, Red Hat Linux uses device files to access all hardware (including disk drives). However, most of these operating systems use slightly different naming conventions to identify any attached storage devices. Here is how these device files are named under Red Hat Linux.

5.1.1. Device Files

Under Red Hat Linux, the device files for disk drives appear in the /dev/ directory. The format for each file name depends on several aspects of the actual hardware and how it has been configured.

Here are these aspects:

Device type

Unit

Partition

We will now explore each of these aspects in more detail.

5.1.1.1. Device Type

The first two letters of the device file name refer to the specific type of device. For disk drives, there are two device types that are most common:

sd — The device is SCSI-based

hd — The device is IDE-based

SCSI and IDE are two different industry standards that define methods for attaching devices to a computer system. The following sections briefly describe the characteristics of these two different connection technologies.

5.1.1.1.1. SCSI

Formally known as the Small Computer System Interface, the SCSI standard defines a bus along which multiple devices may be connected. A SCSI bus is a parallel bus, meaning that there is a single set of parallel wires that go from device to device. Because these wires are shared by all devices, it is necessary to have a way of uniquely identifying and communicating with an individual device. This is done by assigning each device on a SCSI bus a unique numeric address or SCSI ID.

Important

The number of devices that are supported on a SCSI bus depends on the width of the bus. Regular SCSI supports 8 uniquely-addressed devices, while wide SCSI supports 16. In either case, you must make sure that all devices are set to use a unique SCSI ID. Two devices sharing a single ID will cause problems that could lead to data corruption before the conflict can be resolved.

One other thing to keep in mind is that every device on the bus uses an ID. This includes the SCSI controller. Quite often system administrators forget this and unwittingly set a device to use the same SCSI ID as the bus’s controller. This also means that, in practice, only 7 (or 15, for wide SCSI) devices may be present on a single bus, as each bus must include its own controller.
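As an illustration of these addressing rules (hypothetical code, not part of the original document), the following sketch checks a proposed set of device IDs for a single bus. It assumes the controller sits at ID 7, the conventional default for SCSI host adapters.

    # Sketch: validate SCSI ID assignments on a single bus.
    # Assumes the controller occupies ID 7, the conventional default.
    CONTROLLER_ID = 7

    def check_scsi_ids(device_ids, wide=False):
        """Raise ValueError if the proposed device IDs break the bus rules."""
        max_ids = 16 if wide else 8              # wide SCSI doubles the IDs
        for dev_id in device_ids:
            if not 0 <= dev_id < max_ids:
                raise ValueError("ID %d is outside 0-%d" % (dev_id, max_ids - 1))
            if dev_id == CONTROLLER_ID:
                raise ValueError("ID %d collides with the controller" % dev_id)
        if len(set(device_ids)) != len(device_ids):
            raise ValueError("two devices share an ID; expect data corruption")

    check_scsi_ids([0, 1, 2, 4])                 # fine on a narrow bus
    try:
        check_scsi_ids([0, 7])                   # ID 7 belongs to the controller
    except ValueError as err:
        print(err)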

As technological advances have taken place, the SCSI standard has been amended to support them.

For instance, the number of wires that carried data along the bus went from 8 (known simply as SCSI) to 16 (known as wide SCSI). As faster hardware made higher transfer rates possible, the bus speed increased from 5 MB/sec to as much as 160 MB/sec.

The different bus speeds are identified by adding words like "fast", "ultra", and "ultra-3" to the name of the SCSI environment being supported.

Because of SCSI’s bus-oriented architecture, it is necessary to properly terminate both ends of the bus. Termination is accomplished by placing a load of the correct impedance on each conductor comprising the SCSI bus. Termination is an electrical requirement; without it, the various signals present on the bus would be reflected off the ends of the bus, garbling all communication.

Many (but not all) SCSI devices come with internal terminators that can be enabled or disabled using jumpers or switches. External terminators are also available.

5.1.1.1.2. IDE

IDE stands for Integrated Drive Electronics. A later version of the standard, known as EIDE (the extra "E" standing for "Enhanced"), has been almost universally adopted in place of IDE. However, in normal conversation both are known as IDE. Like SCSI, IDE is an interface standard used to connect devices to computer systems, and like SCSI, IDE implements a bus topology.

However, there are differences between the two standards. The most important is that IDE cannot match SCSI’s expandability, with each IDE bus supporting only two devices (known as a master and a slave).
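Since each IDE bus carries exactly one master and one slave, the unit letters introduced in the next section map directly onto bus positions. The sketch below is illustrative, not from the original text; it spells out the conventional Linux mapping for the first two IDE buses.

    # Sketch: the conventional mapping of IDE unit letters to bus positions.
    # hda/hdb sit on the first (primary) bus, hdc/hdd on the second.
    IDE_UNITS = {
        "hda": ("primary", "master"),
        "hdb": ("primary", "slave"),
        "hdc": ("secondary", "master"),
        "hdd": ("secondary", "slave"),
    }

    for name, (bus, role) in sorted(IDE_UNITS.items()):
        print("/dev/%s is the %s device on the %s IDE bus" % (name, role, bus))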

5.1.1.2. Unit

Following the two-letter device type (sd, for example) are one or two letters denoting the specific unit. The unit designator starts with "a" for the first unit, "b" for the second, and so on. Therefore, the first hard drive on your system may appear as hda or sda.

Tip

SCSI’s ability to address large numbers of devices necessitated the addition of a second unit character to support systems with more than 26 SCSI devices attached. Therefore, the first 26 SCSI hard drives would be named sda through sdz, with the 27th named sdaa, the 28th named sdab, and so on through to sddx.
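These unit letters behave like spreadsheet column labels (bijective base-26). As a hypothetical illustration of the convention, not part of the original document, this sketch converts between a unit’s ordinal number and its letter suffix.

    # Sketch: convert a 1-based unit number to its device-letter suffix
    # and back, following the sda, ..., sdz, sdaa, sdab, ... convention.
    import string

    def unit_letters(n):
        """1 -> 'a', 26 -> 'z', 27 -> 'aa', 28 -> 'ab', ..."""
        letters = ""
        while n > 0:
            n, rem = divmod(n - 1, 26)
            letters = string.ascii_lowercase[rem] + letters
        return letters

    def unit_number(letters):
        """Inverse of unit_letters: 'a' -> 1, 'aa' -> 27."""
        n = 0
        for ch in letters:
            n = n * 26 + (ord(ch) - ord("a") + 1)
        return n

    assert unit_letters(1) == "a"            # first drive: sda
    assert unit_letters(27) == "aa"          # 27th drive: sdaa
    assert unit_number("ab") == 28           # 28th drive: sdab
    print("/dev/sd%s" % unit_letters(27))    # prints /dev/sdaa

Evaluating unit_number("dx") gives 128, presumably reflecting a 128-disk limit in the SCSI disk driver of kernels from that era.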
