4.8 Kernel customization

The operating system kernel is the most important part of the system: it drives the hardware of the machine and shares it between multiple processes.

If the kernel does not work well, the system as a whole will not work well. The main reason for making changes to the kernel is to fix bugs and to upgrade system software, such as support for new hardware; performance gains can also be achieved, however, if one is patient. We shall return to the issue of performance in section 8.11. Kernel configuration varies widely between operating systems. Some systems require kernel modification for every minuscule change, while others live quite happily with the same kernel unless major changes are made to the hardware of the host.

Many operating system kernels are monolithic, statically compiled programs which are specially built for each host, but static programs are inflexible and the current trend is to replace them with software-configurable systems which can be manipulated without the need to recompile the kernel. System V Unix has blazed the trail of adaptable, configurable kernels, in its quest to build an operating system which will scale from laptops to mainframes. It introduces kernel modules which can be loaded on demand. By loading parts of the kernel only when required, one reduces the size of the resident kernel memory image, which can save memory. This policy also makes upgrades of the different modules independent of the main kernel software, which makes patching and reconfiguration simpler. SVR4 Unix and its derivatives, like Solaris and Unixware, are testimony to the flexibility of this design.

Windows has also taken a modular view of kernel design. Configuration of the Windows kernel does not require a recompilation, only the choice of a number of parameters, accessed through the system editor in the Performance Monitor, followed by a reboot. GNU/Linux switched from a static, monolithic kernel to a modular design quite quickly. The Linux kernel strikes a balance between static compilation and modular loading. This balances the convenience of modules with the increased speed of having statically compiled code permanently in memory. Typically, heavily used kernel modules are compiled in statically, while infrequently used modules are loaded on demand.

Solaris

Neither Solaris nor Windows requires or permits kernel recompilation in order to make changes. In Solaris, for instance, one edits configuration files and reboots for an auto-reconfiguration. First we edit the file /etc/system to change kernel parameters, then reboot with the command

reboot -- -r

which reconfigures the system automatically. There is also a large number of system parameters which can be configured on the fly (at run time) using the ndd command.
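For illustration, entries in /etc/system take the form of simple set statements; the parameters and values below are examples only, and sensible values depend on the workload of the host:

set maxusers=128
set shmsys:shminfo_shmmax=268435456

Similarly, ndd can query or set driver parameters without a reboot; the TCP parameter shown here is merely an example:

# ndd /dev/tcp tcp_conn_req_max_q

# ndd -set /dev/tcp tcp_conn_req_max_q 1024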

GNU/Linux

The Linux kernel is subject to more frequent revision than many other systems, owing to the pace of its development. It must be recompiled when new changes are to be included, or when an optimized kernel is required. Many GNU/Linux distributions ship with older kernels, while newer kernels offer significant performance gains, particularly in kernel-intensive applications like NFS, so there is a practical reason to upgrade the kernel.

The compilation of a new kernel is a straightforward, but time-consuming, process. The standard published procedure for installing and configuring a new kernel is as follows. New kernel distributions are obtained from any mirror of the Linux kernel site [176]. First we back up the old kernel, unpack the kernel sources into the operating system's files (see the note below) and alias the kernel revision to /usr/src/linux. Note that the bash shell is required for kernel compilation.

$ cp /boot/vmlinuz /boot/vmlinuz.old

$ cd /usr/src

$ tar zxf /local/site/src/linux-2.2.9.tar.gz

$ ln -s linux-2.2.9 linux

There are often patches to be collected and applied to the sources. For each patch file:

$ zcat /local/site/src/patchX.gz | patch -p0
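If several patches have to be applied in sequence, a short shell loop saves some typing; the file names here are placeholders for whatever patches have been collected locally:

$ for p in /local/site/src/patch-*.gz; do zcat $p | patch -p0; done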

Then we make sure that we are building for the correct architecture (Linux now runs on several types of processor).

$ cd /usr/include

$ rm -rf asm linux scsi

$ ln -s /usr/src/linux/include/asm-i386 asm

$ ln -s /usr/src/linux/include/linux linux

$ ln -s /usr/src/linux/include/scsi scsi

Next we prepare the configuration:

$ cd /usr/src/linux

$ make mrproper

The command make config can now be used to set kernel parameters. More user-friendly window-based programs, make xconfig or make menuconfig, are also available, though the former does require one to run X11 applications as root, which is a potential security faux pas. The customization procedure has defaults which one can fall back on. The choices are Y to include an option statically in the kernel, N to exclude it and M to include it as a loadable module. The capitalized option indicates the default. Although there are defaults, it is important to think carefully about the kind of hardware we are using. For instance, is SCSI support required?
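The answers are recorded in a file called .config at the top of the kernel source tree. The entries below are only an illustration of the format, where y means compiled in statically and m means built as a loadable module:

CONFIG_SCSI=y
CONFIG_BLK_DEV_IDE=y
CONFIG_NFS_FS=m
# CONFIG_FTAPE is not set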

One of the questions prompts us to specify the type of processor, for optimization:

Processor type (386, 486, Pentium, PPro) [386]

The default, in square brackets, is for a generic 386, but Pentium machines will benefit from optimizations if we choose correctly. If we are compiling on hosts without CD-ROMs and tape drives, there is no need to include support for these, unless we plan to copy this compiled kernel to other hosts which do have these.

After completing the long configuration sequence, we build the kernel:

# make dep

# make clean

# make bzImage

and move it into place:

# mv arch/i386/boot/bzImage /boot/vmlinuz-2.2.9

# ln -s /boot/vmlinuz-2.2.9 /boot/vmlinuz

# make modules

# make modules_install

Giving the image a versioned name and pointing a symbolic link at it allows us to keep track of which version is running, while still having the standard kernel name.
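Once the modules are installed under /lib/modules, they can be inspected and loaded at run time with the standard module utilities; the module name here is only an example:

# depmod -a

# lsmod

# modprobe nfs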

To alter kernel parameters on the fly, Linux uses a number of writable pseudo-files under /proc/sys, e.g.

echo 1 > /proc/sys/vm/overcommit_memory

cat /proc/sys/vm/overcommit_memory

This can be used to tune values or switch features.
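On most distributions the sysctl command provides a convenient front end to the same /proc/sys tree, using dotted names in place of path separators (assuming the sysctl utility is installed):

$ sysctl vm.overcommit_memory

$ sysctl -w vm.overcommit_memory=1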

lilo and Grub

After copying a new kernel into place, we have to update the boot blocks on the system disk so that a boot program can be located before there is an operating kernel which can interpret the filesystem. This applies to any operating system, e.g. SunOS has the installboot program. After installing a new kernel in GNU/Linux, we update the boot records on the system disk by running the lilo program, invoked simply by typing lilo. This reads a default configuration file /etc/lilo.conf and writes loader data to the Master Boot Record (MBR). One can also write to the primary Linux partition, in case something should go wrong:

lilo -b /dev/hda1

so that we can still boot, even if another operating system should destroy the boot block. A new and superior boot loader called Grub is now gaining popularity in commercial Linux distributions.
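For reference, a minimal /etc/lilo.conf, assuming the kernel installed above and a root filesystem on the first IDE partition, might look something like the following; the device names are illustrative only:

boot=/dev/hda
image=/boot/vmlinuz
  label=linux
  root=/dev/hda1
  read-only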

Logistics of kernel customization

The standard procedure for installing a new kernel breaks a basic principle:

don’t mess with the operating system distribution, as this will just be overwritten by later upgrades. It also potentially breaks the principle of reproducibility: the choices and parameters which we choose for one host do not necessarily apply for others. It seems as though kernel configuration is doomed to lead us down the slippery path of making irreproducible, manual changes to every host.

We should always bear in mind that what we do for one host must usually be repeated for many others. If it were necessary to recompile and configure a new kernel on every host individually, it would simply never happen. It would be a project for eternity.

The situation with a kernel is not as bad as it seems, however. Although, in the case of GNU/Linux, we collect kernel upgrades from the net as though they were third party software, the kernel is rightfully a part of the operating system. It is maintained by the same source as the kernel in the distribution, i.e. we are not in danger of losing anything more serious than a configuration file if we upgrade later. However, reproducibility across hosts is a more serious concern. We do not want to repeat the job of kernel compilation on every single host. Ideally, we would like to compile once and then distribute to similar hosts. Kernels can be compiled, cloned and distributed to different hosts provided they have a common hardware base (this comes back to the principle of uniformity). Life is made easier if we can standardize kernels; in order to do this we must first have standardized hardware.

The modular design of newer kernels means that we also need to copy the corresponding modules in /lib/modules to the receiving hosts. This is a logistical problem which requires some experimentation in order to find a viable solution for a local site.
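As a minimal sketch, assuming a common hardware base and that rsync and remote shell access are available, cloning a compiled kernel to a similar host might look like this (the host name is a placeholder):

$ rsync -a /boot/vmlinuz-2.2.9 host2:/boot/

$ rsync -a /lib/modules/2.2.9/ host2:/lib/modules/2.2.9/

The boot loader must then be updated on the receiving host, e.g. by running lilo there.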

These days it is not usually necessary to build custom kernels. The default kernels supplied with most OSs are good enough for most purposes. Performance enhancements are obtainable, however, particularly on busy servers. See section 8.11 for more hints.

Exercises

Self-test objectives

1. List the considerations needed in creating a server room.

2. How can static electricity cause problems for computers and printers?

3. What are the procedures for shutting down computers safely at your site?

4. How do startup and shutdown procedures differ between Unix and Windows?

5. What is the point of partitioning disk drives?

6. Can a disk partition exceed the size of a hard-disk?

7. How do different Unix-like operating systems refer to disk partitions?

8. How does Windows refer to disk partitions?

9. What is meant by ‘creating a new filesystem’ on a disk partition in Unix?

10. What is meant by formatting a disk in Unix and Windows (hint: they do not mean the same)?

11. What different filesystems are in use on Windows hosts? What are the pros and cons of each?

12. What is the rationale behind the principle of (data) Separation I?

13. How does object orientation, as a strategy, apply to system administration?

14. How is a new disk attached to a Unix-like host?

15. List the different ways to install an operating system on a new computer from a source.

16. What is meant by a thin client?

17. What is meant by a dual-homed host?

18. What is meant by host cloning? Explain how you would go about cloning a Unix-like and Windows host.

19. What is meant by a software package?

20. What is meant by free, open source and proprietary software? List some pros and cons of each of these.

21. Describe a checklist or strategy for familiarizing yourself with the layout of a new operating system file hierarchy.

22. Describe how to install Unix software from source files.

23. Describe how you would go about installing software provided on a CD-ROM or DVD.

24. What is meant by a shared library or DLL?

25. Explain the principle of limited privilege.

26. What is meant by kernel customization and when is it necessary?

Problems

1. If you have a PC to spare, install a GNU/Linux distribution, e.g. Debian, or a commercial distribution. Consider carefully how you will partition the disk.

Can you imagine repeating this procedure for one hundred hosts?

2. Install Windows (NT, 2000, XP etc). You will probably want to repeat the procedure several times to learn the pitfalls. Consider carefully how you will partition the disk. Can you imagine repeating this procedure for 100 hosts?

3. If space permits, install GNU/Linux and Windows together on the same host.

Think carefully, once again, about partitioning.

4. For both of the above installations, design a directory layout for local files.

Discuss how you will separate operating system files from locally installed files. What will be the effect of upgrading or reinstalling the operating system at a later time? How does partitioning of the disk help here?

5. Imagine the situation in which you install every independent software package in a directory of its own. Write a script which builds and updates the PATH variable for users automatically, so that the software will be accessible from a command shell.

6. Describe what is meant by a URL or universal naming scheme for files.

Consider the location of software within a directory tree: some software packages compile the names of important files into software binaries. Explain why the use of a universal naming scheme guarantees that the software will always be able to find the files even when mounted on a different host, and conversely why cross mounting a directory under a different name on a different host is doomed to break the software.

7. Upgrade the kernel on your GNU/Linux installation. Collect the kernel from ref. [176].

8. Determine your Unix/Windows current patch level. Search the web for more recent patches. Which do you need? Is it always right to patch a system?

9. Comment on how your installation procedure could be duplicated if you had not one, but one hundred machines to install.

10. Make a checklist for standardizing hosts: what criteria should you use to ensure standardization? Give some thought to the matter of quality assurance. How can your checklist help here? We shall be returning to this issue in chapter 8.

11. Make a scaling checklist for your system policy.

12. Suppose your installed host is a mission-critical system. Estimate the time it would take you to get your host up and running again in case of complete failure. What strategy could you use to reduce the time the service was out of action?

13. Given the choice between compiling a critical piece of software yourself, or installing it as a software package from your vendor or operating system provider, which would you choose? Explain the issues surrounding this choice and the criteria you would use to make the decision.
