It's possible (even likely) that the first time you try to share and mount an NFS shared directory, it will fail. The following section takes you through some problems and issues that can arise as you try to use the NFS facility.
To make sure that you have properly exported an NFS shared directory, you need to be able to see if any errors occurred when the export was done. Unfortunately, the NFS Server Configuration window will not always tell you when it creates an exported NFS directory entry that doesn't work.
If you launch the NFS Server Configuration window from a Terminal window, you will be able to see any error messages that occur. As you add or modify shared directories in the NFS Server Configuration window, error and informational messages appear in the Terminal window from which you launched it. Here are some examples:
# redhat-config-nfs
exportfs: /etc/exports: 3: bad anonuid "anonuid=chris"
exportfs: toy has non-inet addr
exportfs: toy has non-inet addr
In this example, after I launched the NFS Server Configuration window, I added a shared directory. Although the action successfully saved the entry to /etc/exports (and didn't complain), the messages in the Terminal window show a few errors. First, I entered a user name (chris) as the anonymous user, but the anonuid option requires a numeric UID (such as 505). Second, the name of the computer I was allowing to share the directory (toy) failed to have its address resolved (my DNS server couldn't find toy, and it was not listed in my /etc/hosts file).
Because the NFS Server Configuration window mostly writes to the /etc/exports file and runs the exportfs command, you can do those things by hand and find out if there were any problems. For example:
# exportfs -v -a
exportfs: /etc/exports: 3: bad anonuid "anonuid=chris"
exporting *:/usr/local/share/television
exporting *:/root/packages/FC2
Here you can see that running the exportfs command in verbose mode (-v) displays both the failures and the successes in exporting the shared NFS directories in the /etc/exports file. The attempt to export my /home/chris directory failed because I identified the anonymous user by user name instead of by UID number. The other two exports in the file (/usr/local/share/television and /root/packages/FC2) both succeeded, as indicated by the exporting lines. You can verify that by typing the following:
# showmount -e
/usr/local/share/television *
/root/packages/FC2 *
Here you can see the two directories that have been successfully exported and that they are available to any host computer (*).
You try to unmount a remote NFS directory (using the umount command) and the unmount fails with a "device is busy" message such as the following:
# umount /mnt/whatever
umount: /mnt/whatever: device is busy
Most likely, there is a process holding the directory open. You can type the lsof command, along with the directory name, to see if any processes are currently accessing the shared directory:
[root@shuttle one]# lsof /mnt/whatever
COMMAND   PID    USER  FD   TYPE  DEVICE  SIZE  NODE     NAME
lsof      3893   root  cwd  DIR   0,12    4096  1079267  /mnt/whatever (duck:/tmp)
lsof      3894   root  cwd  DIR   0,12    4096  1079267  /mnt/whatever (duck:/tmp)
bash      31558  root  cwd  DIR   0,12    4096  1079267  /mnt/whatever (duck:/tmp)
Here you can see that the bash shell is using /mnt/whatever as the current working directory. Since I ran the lsof command from that same shell, that command is also showing /mnt/whatever. In this case, I can simply change to a different directory, and then try the umount command again. In some cases, you might need to kill a process that is holding open the device before the directory can be unmounted.
Often, instead of running the lsof command, I will just look for the shell I have opened to the shared directory after an unmount fails.
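The whole recovery procedure can be sketched as a short command sequence. This is a sketch only; /mnt/whatever is a placeholder for your own mount point, and fuser (from the psmisc package) stands in for lsof as a way to both list and kill the processes holding the mount:

```shell
# Sketch: freeing a busy NFS mount point (run as root; /mnt/whatever is a placeholder)
cd /                       # move your own shell out of the mounted directory
fuser -v /mnt/whatever     # list any processes still using the mount point
# If a process refuses to let go, kill everything using the mount (use with care):
# fuser -k /mnt/whatever
umount /mnt/whatever       # retry the unmount
```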
The most common reasons for failures when you try to mount an NFS share are:
Wrong share information
Firewalls are blocking NFS ports
Mount request is coming from an insecure port on the client

The following sections describe each of these issues.
So as not to give away information about its NFS shares to inquiring clients, an NFS server returns the same failure message for several different problems when you try to mount a share. The basic failed NFS mount request looks like the following:
# mount -t nfs toy:/home/chris /mnt/toy
mount: toy:/home/chris failed, reason given by server: Permission denied
This message indicates that you did contact the server, but you are not requesting a shared directory that is available for you to mount. Reasons that the mount might have failed include:
Directory is not being exported-The directory may exist, but it is not currently being exported from the server. Check on the server that the directory is actually being shared. If it is, make sure that you typed the name of the directory correctly.
You don't have proper permission-The NFS server may have exported the directory, but you may not be on the list of clients.
In either case, the error message indicates that the NFS service is running on the server, so you should contact the system administrator of the NFS server to make sure you have the correct name of the share and the rights to mount it. If the NFS service had not been running on the server, you would have seen a message like the following when the mount failed:
mount: RPC: Program not registered
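To check from the client whether the server's NFS-related RPC services are registered at all, you can query the server's portmapper with the rpcinfo command. This is a sketch; replace toy with your server's name:

```shell
# Query the portmapper on the NFS server (toy is a placeholder host name)
rpcinfo -p toy
# In the output, look for entries for portmapper, mountd, and nfs.
# If mountd and nfs are missing, the NFS service is not running on the server.
```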
A restrictive firewall on the server might mistakenly be blocking NFS clients from mounting your NFS shares. If you are seeing RPC failures from NFS clients trying to mount NFS shares, it may be because the firewall rules (iptables) are not allowing requests to RPC and/or NFS ports.
To remedy this problem, be sure to open access, using the iptables command, to ports 111 (RPC) and 2049 (NFS) on the server. This should allow access from the NFS client systems.
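A minimal sketch of such rules follows, assuming the default INPUT chain; note that mountd and other RPC services may also use dynamically assigned ports, so you may need additional rules or fixed port assignments on a tightly firewalled server:

```shell
# Allow portmapper (RPC) and NFS traffic (run as root on the server)
iptables -A INPUT -p tcp --dport 111  -j ACCEPT   # portmapper, TCP
iptables -A INPUT -p udp --dport 111  -j ACCEPT   # portmapper, UDP
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT   # NFS, TCP
iptables -A INPUT -p udp --dport 2049 -j ACCEPT   # NFS, UDP
```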
You try to mount an NFS directory from your Linux computer on an iMac and it fails. The problem might be that some versions of Mac OS X make the request to mount your NFS shared directory from a port above 1024. By default, NFS in Fedora and Red Hat Linux refuses to honor NFS mount requests that originate from ports above 1024.
You can get around this problem by adding the insecure option to your exported file system (in the /etc/exports file) when you export the file system from the server. This allows requests for NFS mounts (which come from insecure ports by default on Mac OS X systems) to succeed.
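For example, an /etc/exports entry using the insecure option might look like the following (the directory and host name are placeholders):

```
/home/chris  mac.example.com(rw,insecure)
```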
Here are a few tips for those times when you can't access files and directories as you might expect:
Can't access NFS directory as root user-By default, when someone shares a directory using NFS, the root user is always mapped into the anonymous user (which is the nfsnobody user and group, with UID and GID 65534). This is true even if all other users are mapped into their same UID and GID numbers on the server. If you need to access a remote file system as the root user, the server must share that directory with the no_root_squash option added to the /etc/exports definition for the shared directory.
Can't write files to the remote NFS directory-Here's a quick checklist of reasons why you might not be able to write to a remote NFS directory from an NFS client computer:
Shared directory was shared read-only-Even if you mounted an NFS directory as read/write, you won't be able to write to it if it was shared read-only.
Shared directory was mounted read-only-A shared NFS directory that is available with read/write permissions still has to be mounted with read/write permission to allow writing.
File and directory permissions prevent you from changing or creating files-By default, you will be able to create or change only those files and directories that are owned by the anonymous NFS user (nfsnobody by default) or that are open for writing to everyone. If you need to create and change other files and directories on the shared directory, the NFS server needs to specifically allow you to do that. (See the NFS User Permissions section.)
File locking requests fail from client-Some older NFS clients may not be able, by default, to successfully request file locking on a shared NFS directory. The reason is that the NFS client may not be passing credential information about the user to the NFS server because the feature wasn't implemented in the client. To get around this problem, the server can add the insecure_locks option to the definition of the share in the /etc/exports file.
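The export options mentioned in the tips above can be sketched as /etc/exports entries like the following (directories and host names are placeholders):

```
# Let root on a trusted client act as root on the share:
/home/chris  admin.example.com(rw,no_root_squash)
# Accept locking requests from older clients that don't pass credentials:
/home/chris  oldclient.example.com(rw,insecure_locks)
```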
Changing mount and export settings can have some effect on how NFS directories perform when they are shared with client computers:
Syncing writes to disk-By default, an exported NFS directory is shared with the sync option on, causing writes to be committed to hard disk before an NFS request is completed. Setting the async option for the shared directory (in /etc/exports) can improve performance of writes to that shared directory, although it poses some risk of data loss if the server crashes before the data moves from cache to hard disk. Not only can this result in corrupted data, but you may not discover the corruption until you later return to use that data.
Delaying writes to disk-With the sync option on, the wdelay option is also on by default, causing the disk to save several write requests before committing them to disk to improve performance. Because this can actually hurt performance on systems where many small, unrelated writes are occurring, you can disable the wdelay option by setting the no_wdelay option for the shared directory (in the /etc/exports file).
Improving general disk performance-There are many standard disk performance techniques that will help the performance of your NFS server. See Chapter 9 for information on using the hdparm command, as well as other utilities, to tune your hard disk.
Changing read and write sizes-On the client side, there are options you can add to the mount command (or the /etc/fstab file) when you mount the NFS shared directory that can improve performance. The nfs man page recommends that rsize= (the number of bytes for each data read) and wsize= (the number of bytes for each data write) be increased from the default 1024 to 8192 in both cases. The larger data reads and writes can be of particular value for transactions that transfer large blocks of data on networks that experience few data collisions.
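For example, larger read and write sizes can be set from the mount command line or in /etc/fstab (server and mount point are placeholders):

```
# mount -o rsize=8192,wsize=8192 toy:/home/chris /mnt/toy
# or, as an /etc/fstab entry:
toy:/home/chris  /mnt/toy  nfs  rsize=8192,wsize=8192  0 0
```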
Requests to an NFS server are timing out-Whether because of a slow network, a slow server, or multiple subnet hops to reach the server, it's possible for requests to time out even when the NFS server is actually available. If a remote NFS share is mounted with the soft option and no response arrives within the time-out period, the request fails. With the hard option (the default), read and write requests wait indefinitely for the server to become available: after each time-out period on a hard-mounted NFS share, the time-out is logged, but NFS keeps retrying the request.
Here are a few options you can adjust to prevent time-outs:
If you are getting slow response from an NFS server, increasing the RPC time-out value might improve overall performance. The timeo= option to the mount command lets you change the value of the first RPC time-out from the default 7-tenths of a second (timeo=7) to a larger number (in tenths of seconds). If the RPC request fails to get a response within the first timeo value, the time-out is doubled and the transmission is repeated until the maximum of 60 seconds is reached.
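To illustrate the doubling behavior described above, here is a small shell sketch (not part of NFS itself) that prints the series of RPC time-outs you would see with the default timeo=7:

```shell
# Illustration only: the RPC time-out sequence for the default timeo=7.
timeo=7     # initial time-out, in tenths of a second (0.7 seconds)
max=600     # time-outs stop growing at 60 seconds (600 tenths)
t=$timeo
seq=""
while [ "$t" -le "$max" ]; do
  seq="$seq $t"
  t=$(( t * 2 ))      # each failed retransmission doubles the time-out
done
echo "time-out sequence (tenths of a second):$seq"
```

So a slow but reachable server gets progressively more time to answer before each retransmission.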
If the read or write requests themselves are timing out, you can increase the number of minor time-outs that occur before a major time-out occurs and the request is abandoned. By default, the value of retrans= is set to 3. This causes the operation to fail after three minor time-outs when the directory is soft mounted.
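For example, to soft-mount a share with a longer initial time-out and more retries, the options might look like this (the values and names shown are illustrative):

```
# mount -o soft,timeo=20,retrans=5 toy:/home/chris /mnt/toy
```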
Skip subtree checking-You can gain some performance improvements with NFS by not verifying that a requested file is actually in the shared directory. Instead, you can have NFS only check that the file is in the correct file system. You can tell NFS to skip subtree checking by adding the no_subtree_check option to the /etc/exports file for the shared directory.
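Putting the performance-related export options above together, /etc/exports entries might look like the following (the directory is illustrative, and remember that async carries the data-loss risk described earlier):

```
# Keep synchronous writes but commit each write immediately:
/usr/local/share/television  *(rw,sync,no_wdelay,no_subtree_check)
# Or trade safety for speed with asynchronous writes:
# /usr/local/share/television  *(rw,async,no_subtree_check)
```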
Because a file system may exhibit different behavior when it is mounted locally than when it is mounted remotely, some unexpected behavior can occur. Here are some examples of NFS behavior that you might not expect:
Can't see contents of a subdirectory-You mount a remote NFS directory and change to a subdirectory, only to find that the subdirectory appears to be empty. You go to the server and see that the directory is full of files that are world readable, but you still can't see them from the client. It may be that the subdirectory is on a separate partition on the server. For example, if you share the root directory (/) of a server that has separate /boot and /home partitions, an ls of the boot and home directories from the client will show them as empty.
To make it so you can traverse to the /boot and /home directories of the shared root directory just described, you have to do two things. First, you must add the nohide option to the export definition of the NFS-shared root file system. Second, you must explicitly share the /boot and /home directories as well in the /etc/exports file. For example:
/      *(ro,nohide)
/home  *(ro)
/boot  *(ro)
Caution: Take care when using the nohide option. Multiple partitions within the same NFS-mounted directory structure apparently run some risk of having conflicting inode numbers, which can confuse NFS.
NFS requests hanging-Here's where you have to consider how critical it is that your data stay in sync between your NFS client and server. If, for example, the client is performing a write operation and the server goes down or otherwise becomes inaccessible on the network, by default (using the hard mount option) the client process will hang until the write is completed. You will not be able to break out of that request without killing the process doing the write.
As an alternative, you can specify the soft option when you mount the directory (to allow the operation to time out eventually) or specify the intr option. (You can add these options when you mount the NFS directory, either from the mount command line or in the /etc/fstab file.) With the intr option set, a Ctrl+C on the command line should let you break out of a command that is waiting for an NFS resource to come back.
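For example (the server and mount point are placeholders):

```
# Allow the operation to time out eventually:
# mount -o soft toy:/home/chris /mnt/toy
# Or keep the hard mount but allow Ctrl+C to interrupt a stuck request:
# mount -o hard,intr toy:/home/chris /mnt/toy
```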
NFS mounts hanging on boot-If you are automatically mounting an NFS directory at boot time (typically from the /etc/fstab file) and the NFS shared directory is temporarily unavailable, by default the boot process will not continue until that NFS directory becomes available.
This behavior might be acceptable if you don't have a working system without that directory (say, for example, if you are remotely mounting your /home directory). However, in many cases, you might just want the system to put the mount request in the background and continue booting the machine.
To have a mount request go into the background after the first mount request times out, add the bg option to the mount options in the /etc/fstab for that entry. To explicitly continue to try the mount in the foreground, you can add the fg option to the mount options in the /etc/fstab entry.
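A sketch /etc/fstab entry using the bg option might look like the following (the server, paths, and other options shown are illustrative):

```
toy:/home/chris  /mnt/toy  nfs  bg,hard,intr  0 0
```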
Tools like the showmount and nfsstat commands can help you monitor the behavior of your NFS server. Here are some examples of ways you can check out what's happening with your NFS server:
Check who is mounting your directories-You can find out which host computers are mounting your shared NFS directories and which directories they are mounting using the showmount command as follows:
# showmount -a
All mount points on daylight.linuxtoys.net:
toys:/usr/local/share/television
duck:/root/packages/FC2
To see just the host names of computers that are mounting your shared directories, type the following:
# showmount
Hosts on daylight.linuxtoys.net:
toys
duck
To see just the directories that are being shared, type the following:
# showmount -d
Directories on daylight.linuxtoys.net:
/usr/local/share/television
/root/packages/FC2
Check available disk space on remote NFS directories-To find out how much disk space is available on any NFS directories that you have currently mounted, you can use the df command along with the nfs file system type (-F nfs) and a request for human-readable form (-h):
# df -hF nfs
Filesystem            Size  Used Avail Use% Mounted on
daylight:/home/chris   16G   13G  1.7G  89% /var/daylight-chris
This output shows the disk space (size, used and available) for all remote NFS shared directories that are mounted on the local system. It also shows the computer and directory name for each remote file system and where on the local system the remote directory is mounted.
Monitoring NFS calls-Using the nfsstat command, you can monitor the activity of various parts of the NFS service. The nfsstat command displays statistics on data being written in both client and server services related to NFS. Using the -o option, nfsstat can display information for the following services related to NFS:
-o nfs-Displays NFS statistics based on each RPC call made to the NFS service. (RPC, which stands for Remote Procedure Call, is a method created by Sun Microsystems for making network requests and transferring data.)
-o rpc-Displays general RPC call information.
-o net-Displays network layer statistics, related to lower level data transfer.
-o fh-Displays file handle data, such as file look-ups and cache hits.
-o rc-Displays information relating to hits on the NFS server's request reply cache.
The following is an example of the nfsstat command showing NFS client statistics:
# nfsstat -c
Client rpc stats:
calls      retrans    authrefrsh
145        4          0
Client nfs v2:
null       getattr    setattr    root       lookup     readlink
0       0% 49     62% 0       0% 0       0% 18     22% 3       3%
read       wrcache    write      create     remove     rename
0       0% 0       0% 0       0% 0       0% 0       0% 0       0%
link       symlink    mkdir      rmdir      readdir    fsstat
0       0% 0       0% 0       0% 0       0% 0       0% 9      11%
Client nfs v3:
null       getattr    setattr    lookup     access     readlink
0       0% 38     57% 0       0% 1       1% 6       9% 0       0%
read       write      create     mkdir      symlink    mknod
12     18% 0       0% 0       0% 0       0% 0       0% 0       0%
remove     rmdir      rename     link       readdir    readdirplus
0       0% 0       0% 0       0% 0       0% 0       0% 2       3%
fsstat     fsinfo     pathconf   commit
4       6% 3       4% 0       0% 0       0%
Under the rpc stats, you can see that 145 RPC calls were made and there were 4 retransmissions. If the number of retransmissions goes over 5 percent of the total number of calls, it indicates that the server is having trouble keeping up with your requests for data or that your requests are simply not reaching it.
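Using the numbers above, the 5 percent rule of thumb can be checked with a little shell arithmetic (calls and retrans are taken from the sample nfsstat -c output):

```shell
# Values from the sample nfsstat -c output above
calls=145
retrans=4
# Integer percentage of RPC calls that were retransmitted
pct=$(( retrans * 100 / calls ))
echo "retransmissions: ${pct}% of ${calls} calls"   # prints "retransmissions: 2% of 145 calls"
if [ "$pct" -gt 5 ]; then
  echo "WARNING: server may be struggling to keep up, or requests are being lost"
fi
```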
NFS version 2 and version 3 calls reflect the type of activity going on between the NFS client and the NFS servers it is accessing. Many of the attributes here are those you would expect when reading, writing, and accessing file systems. Here is an example of nfsstat statistics for an NFS server:
# nfsstat -s
Server rpc stats:
calls      badcalls   badauth    badclnt    xdrcall
69997      0          0          0          0
Server nfs v2:
null       getattr    setattr    root       lookup     readlink
1     100% 0       0% 0       0% 0       0% 0       0% 0       0%
read       wrcache    write      create     remove     rename
0       0% 0       0% 0       0% 0       0% 0       0% 0       0%
link       symlink    mkdir      rmdir      readdir    fsstat
0       0% 0       0% 0       0% 0       0% 0       0% 0       0%
Server nfs v3:
null       getattr    setattr    lookup     access     readlink
1       0% 714     1% 64      0% 379     0% 870     1% 0       0%
read       write      create     mkdir      symlink    mknod
67523  96% 252     0% 58      0% 0       0% 0       0% 0       0%
remove     rmdir      rename     link       readdir    readdirplus
58      0% 0       0% 21      0% 0       0% 30      0% 0       0%
fsstat     fsinfo     pathconf   commit
12      0% 12      0% 0       0% 2       0%
The server statistics (nfsstat -s) reflect the result of calls made to the server related to NFS requests. Read and write requests reflect the amount of data being read from and written to the NFS server. In the rpc stats, look for badcalls (which indicate requests that were rejected by the RPC layer) and badauth (which indicates authentication requests that were rejected by the server).