
Chapter 8. Network File System (NFS)

8.1. How NFS Works

Currently, there are two major versions of NFS included in Red Hat Enterprise Linux. NFS version 3 (NFSv3) supports safe asynchronous writes and is more robust at error handling than the previous NFSv2; it also supports 64-bit file sizes and offsets, allowing clients to access more than 2 GB of file data. NFSv4 works through firewalls and on the Internet, no longer requires an rpcbind service, supports ACLs, and utilizes stateful operations.

Red Hat Enterprise Linux 7 adds support for NFS version 4.1 (NFSv4.1), which provides a number of performance and security enhancements, including client-side support for Parallel NFS (pNFS).

NFSv4.1 no longer requires a separate TCP connection for callbacks, which allows an NFS server to grant delegations even when it cannot contact the client (for example, when NAT or a firewall interferes). Additionally, NFSv4.1 now provides true exactly-once semantics (except for reboot operations), preventing a previous issue whereby certain operations could return an inaccurate result if a reply was lost and the operation was sent twice.

Red Hat Enterprise Linux 7 supports NFSv3, NFSv4.0, and NFSv4.1 clients. NFS clients attempt to mount using NFSv4.0 by default, and fall back to NFSv3 if the mount operation is not successful.
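For example, a specific version can be requested explicitly with the nfsvers mount option; the server name and paths below are placeholders:

# mount -t nfs -o nfsvers=3 server.example.com:/export /mnt/nfs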

Note

NFS version 2 (NFSv2) is no longer supported by Red Hat.

All versions of NFS can use Transmission Control Protocol (TCP) running over an IP network, with NFSv4 requiring it. NFSv3 can use the User Datagram Protocol (UDP) running over an IP network to provide a stateless network connection between the client and server.

When using NFSv3 with UDP, the stateless UDP connection (under normal conditions) has less protocol overhead than TCP. This can translate into better performance on very clean, non-congested networks. However, because UDP is stateless, if the server goes down unexpectedly, UDP clients continue to saturate the network with requests for the server. In addition, when a frame is lost with UDP, the entire RPC request must be retransmitted; with TCP, only the lost frame needs to be resent. For these reasons, TCP is the preferred protocol when connecting to an NFS server.
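The transport protocol for an NFSv3 mount can also be forced with the proto mount option; the host name and paths below are illustrative only (proto=udp would select UDP instead):

# mount -t nfs -o nfsvers=3,proto=tcp server.example.com:/export /mnt/nfs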

The mounting and locking protocols have been incorporated into the NFSv4 protocol. The server also listens on the well-known TCP port 2049. As such, NFSv4 does not need to interact with the rpcbind, lockd, and rpc.statd daemons. The rpc.mountd daemon is still required on the NFS server to set up the exports, but is not involved in any over-the-wire operations.



Note

TCP is the default transport protocol for NFS version 2 and 3 under Red Hat Enterprise Linux.

UDP can be used for compatibility purposes as needed, but is not recommended for wide usage. NFSv4 requires TCP.

All the RPC/NFS daemons have a -p command line option that can set the port, making firewall configuration easier.
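For example, rpc.mountd could be started on a fixed port so that the firewall rule set only needs to allow that port; the port number below is purely illustrative:

# rpc.mountd -p 20048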

After TCP wrappers grant access to the client, the NFS server refers to the /etc/exports configuration file to determine whether the client is allowed to access any exported file systems. Once verified, all file and directory operations are available to the user.
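A minimal /etc/exports entry might look like the following; the exported directory and client host name are hypothetical:

/export/data    client.example.com(rw,sync)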

Important

In order for NFS to work with a default installation of Red Hat Enterprise Linux with a firewall enabled, configure IPTables with the default TCP port 2049. Without proper IPTables configuration, NFS will not function properly.
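As a sketch, assuming the default port 2049 and an iptables-based firewall, a rule such as the following allows NFS traffic over TCP:

# iptables -A INPUT -p tcp --dport 2049 -j ACCEPT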

The NFS initialization script and rpc.nfsd process now allow binding to any specified port during system start up. However, this can be error-prone if the port is unavailable, or if it conflicts with another daemon.

8.1.1. Required Services

Red Hat Enterprise Linux uses a combination of kernel-level support and daemon processes to provide NFS file sharing. All NFS versions rely on Remote Procedure Calls (RPC) between clients and servers. RPC services under Red Hat Enterprise Linux 7 are controlled by the rpcbind service. To share or mount NFS file systems, the following services work together depending on which version of NFS is implemented:

Note

The portmap service was used to map RPC program numbers to IP address port number combinations in earlier versions of Red Hat Enterprise Linux. This service is now replaced by rpcbind in Red Hat Enterprise Linux 7 to enable IPv6 support. For more information about this change, refer to the following links:

TI-RPC / rpcbind support: http://nfsv4.bullopensource.org/doc/tirpc_rpcbind.php

IPv6 support in NFS: http://nfsv4.bullopensource.org/doc/nfs_ipv6.php

nfs

systemctl start nfs.service starts the NFS server and the appropriate RPC processes to service requests for shared NFS file systems.

nfslock

systemctl start nfslock.service activates a mandatory service that starts the appropriate RPC processes allowing NFS clients to lock files on the server.

rpcbind

rpcbind accepts port reservations from local RPC services. These ports are then made available (or advertised) so the corresponding remote RPC services can access them.

rpcbind responds to requests for RPC services and sets up connections to the requested RPC service. This is not used with NFSv4.
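The RPC services currently registered with rpcbind, along with their port numbers, can be listed with the rpcinfo utility, for example:

$ rpcinfo -p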

The following RPC processes facilitate NFS services:

rpc.mountd

This process is used by an NFS server to process MOUNT requests from NFSv3 clients. It checks that the requested NFS share is currently exported by the NFS server, and that the client is allowed to access it. If the mount request is allowed, the rpc.mountd server replies with a Success status and provides the File-Handle for this NFS share back to the NFS client.
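For example, the exports that rpc.mountd advertises on a server can be queried from a client with the showmount utility; the host name is a placeholder:

$ showmount -e server.example.com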

rpc.nfsd

rpc.nfsd allows explicit NFS versions and protocols the server advertises to be defined. It works with the Linux kernel to meet the dynamic demands of NFS clients, such as providing server threads each time an NFS client connects. This process corresponds to the nfs service.
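As an illustration, the number of nfsd kernel threads can be adjusted by passing a thread count to rpc.nfsd; the value 8 below is only an example:

# rpc.nfsd 8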

lockd

lockd is a kernel thread which runs on both clients and servers. It implements the Network Lock Manager (NLM) protocol, which allows NFSv3 clients to lock files on the server. It is started automatically whenever the NFS server is run and whenever an NFS file system is mounted.

rpc.statd

This process implements the Network Status Monitor (NSM) RPC protocol, which notifies NFS clients when an NFS server is restarted without being gracefully brought down. rpc.statd is started automatically by the nfslock service, and does not require user configuration.

This is not used with NFSv4.

rpc.rquotad

This process provides user quota information for remote users. rpc.rquotad is started automatically by the nfs service and does not require user configuration.

rpc.idmapd

rpc.idmapd provides NFSv4 client and server upcalls, which map between on-the-wire NFSv4 names (which are strings in the form of user@domain) and local UIDs and GIDs.

For idmapd to function with NFSv4, the /etc/idmapd.conf file must be configured. This service is required for use with NFSv4, although not when all hosts share the same DNS domain name. Refer to the knowledge base article https://access.redhat.com/site/solutions/130783 when using a local domain.
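A minimal /etc/idmapd.conf might set only the NFSv4 domain; the domain name below is a placeholder:

[General]
Domain = example.com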

8.2. pNFS

Support for Parallel NFS (pNFS) as part of the NFS v4.1 standard is available as of Red Hat Enterprise Linux 6.4. The pNFS architecture improves the scalability of NFS, with possible improvements to performance. That is, when a server implements pNFS as well, a client is able to access data through multiple servers concurrently. It supports three storage protocols or layouts: files, objects, and blocks.

Note

The protocol allows for three possible pNFS layout types: files, objects, and blocks. While the Red Hat Enterprise Linux 6.4 client only supported the files layout type, Red Hat Enterprise Linux 7 supports the files layout type, with objects and blocks layout types being included as a technology preview.

To enable this functionality, use the following mount option on mounts from a pNFS-enabled server:

-o v4.1
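For example, a pNFS-capable export could be mounted as follows; the server name and paths are placeholders:

# mount -t nfs -o v4.1 server.example.com:/export /mnt/pnfs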

After the server is pNFS-enabled, the nfs_layout_nfsv41_files kernel module is automatically loaded on the first mount. The mount entry in the output should contain minorversion=1. Use the following command to verify the module was loaded:

$ lsmod | grep nfs_layout_nfsv41_files

For more information on pNFS, refer to: http://www.pnfs.com.