


  • Explain NFS fundamentals, and configure and manage the NFS server and client, including daemons, files, and commands.

  • Troubleshoot various NFS errors.

The NFS service lets computers of different architectures, running different operating systems, share file systems across a network. Just as the mount command lets you mount a file system on a local disk, NFS lets you mount a file system that is located on another system anywhere on the network. Furthermore, NFS support has been implemented on many platforms, ranging from MS-DOS on personal computers to mainframe operating systems, such as Multiple Virtual Storage (MVS). Each operating system applies the NFS model to its file system semantics. For example, a Sun system can mount the file system from a Windows NT or Linux system. File system operations, such as reading and writing, function as though they are occurring on local files. Response time might be slower when a file system is physically located on a remote system, but the connection is transparent to the user regardless of the hardware or operating systems.

The NFS service provides the following benefits:

  • Lets multiple computers use the same files so that everyone on the network can access the same data. This eliminates the need to have redundant data on several systems.

  • Reduces storage costs by having computers share applications and data.

  • Provides data consistency and reliability because all users access the same data.

  • Makes mounting of file systems transparent to users.

  • Makes accessing remote files transparent to users.

  • Supports heterogeneous environments.

  • Reduces system administration overhead.

The NFS service makes the physical location of the file system irrelevant to the user. You can use NFS to allow users to see all the data, regardless of location. With NFS, instead of placing copies of commonly used files on every system, you can place one copy on one computer's disk and have all other systems across the network access it. Under NFS operation, remote file systems are almost indistinguishable from local ones.

NFS Version 4

Solaris 10 introduced a new version of the NFS protocol, which has the following features:

  • The User ID and Group ID are represented as strings. A new daemon process, nfsmapid, maps these IDs to local numeric IDs. The nfsmapid daemon is described later in this chapter, in the section "NFS Daemons."

  • The default transport for NFS version 4 is the Remote Direct Memory Access (RDMA) protocol, a technology for memory-to-memory transfer over high-speed data networks. RDMA improves performance by reducing CPU load and I/O overhead. If RDMA is not available on both the server and the client, TCP is used as the transport.

  • All state and lock information is destroyed when a file system is unshared. In previous versions of NFS, this information was retained.

  • NFS version 4 provides a pseudo file system to give clients access to exported objects on the NFS server.

  • NFS version 4 is a stateful protocol in that both the client and the server hold information about current locks and open files. When a crash or failure occurs, the client and the server work together to re-establish the open or locked files.

  • NFS version 4 no longer uses the mountd, statd, or nfslogd daemons.

  • NFS version 4 supports delegation, a technique where management responsibility of a file can be delegated by the server to the client. Delegation is supported in both the NFS server and the NFS client. A client can be granted a read delegation, which can be granted to multiple clients, or a write delegation, providing exclusive access to a file.

Servers and Clients

With NFS, systems have a client/server relationship. The NFS server is where the file system resides. Any system with a local file system can be an NFS server. As described later in this chapter, in the section "Setting Up NFS," you can configure the NFS server to make file systems available to other systems and users. The system administrator has complete control over which file systems can be mounted and who can mount them.

An NFS client is a system that mounts a remote file system from an NFS server. You'll learn later in this chapter, in the section "Mounting a Remote File System," how you can create a local directory and mount the file system. As you will see, a system can be both an NFS server and an NFS client.

NFS Daemons

NFS uses a number of daemons to handle its services. These services are initialized at startup by the svc:/network/nfs/server:default and svc:/network/nfs/client:default service management facility (SMF) services. The most important NFS daemons are described in Table 9.8.

Table 9.8. NFS Daemons




nfsd

An NFS server daemon that handles file system exporting and file access requests from remote systems. An NFS server runs multiple instances of this daemon. This daemon is usually invoked at the multi-user-server milestone and is started by the svc:/network/nfs/server:default service identifier.

mountd

An NFS server daemon that handles mount requests from NFS clients. This daemon provides information about which file systems are mounted by which clients. You use the showmount command, described later in this chapter, to view this information. This daemon is usually invoked at the multi-user-server milestone and is started by the svc:/network/nfs/server:default service identifier. This daemon is not used in NFS version 4.

lockd

A daemon that runs on the NFS server and NFS client and provides file-locking services in NFS. This daemon is started by the svc:/network/nfs/client service identifier at the multi-user milestone.

statd

A daemon that runs on the NFS server and NFS client and interacts with lockd to provide the crash and recovery functions for the locking services on NFS. This daemon is started by the svc:/network/nfs/client service identifier at the multi-user milestone. This daemon is not used in NFS version 4.

rpcbind

A daemon that facilitates the initial connection between the client and the server.

nfsmapid

A new daemon that maps to and from NFS version 4 owner and group identification and UID and GID numbers. It uses entries in the passwd and group files to carry out the mapping, and also references /etc/nsswitch.conf to determine the order of access.

nfs4cbd

A new client-side daemon that listens on each transport and manages the callback functions to the NFS server.

nfslogd

A daemon that provides operational logging to the Solaris NFS server. nfslogd is described later in this chapter, in the section "NFS Server Logging." The nfslogd daemon is not used in NFS version 4.

Setting Up NFS

Servers let other systems access their file systems by sharing them over the NFS environment. A shared file system is referred to as a shared resource. You specify which file systems are to be shared by entering the information in the file /etc/dfs/dfstab. Entries in this file are shared automatically whenever you start the NFS server operation. You should set up automatic sharing if you need to share the same set of file systems on a regular basis. Most file system sharing should be done automatically; the only time manual sharing should occur is during testing or troubleshooting.

The /etc/dfs/dfstab file lists all the file systems your NFS server shares with its NFS clients. It also controls which clients can mount a file system. If you want to modify /etc/dfs/dfstab to add or delete a file system or to modify the way sharing is done, you edit the file with a text editor, such as vi. The next time the computer enters the multi-user-server milestone, the system reads the updated /etc/dfs/dfstab to determine which file systems should be shared automatically.

Each line in the dfstab file consists of a share command, as shown in the following example:

more /etc/dfs/dfstab

The system responds by displaying the contents of /etc/dfs/dfstab:

#       Place share(1M) commands here for automatic execution
#       on entering init state 3.
#       Issue the command 'svcadm enable network/nfs/server' to
#       run the NFS daemon processes and the share commands, after adding
#       the very first entry to this file.
#       share [-F fstype] [ -o options] [-d "<text>"] <pathname> \
#       .e.g,
#       share  -F nfs  -o rw=engineering  -d "home dirs" /export/home2
share -F nfs /export/install/sparc_10
share -F nfs /jumpstart
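The active entries in this file are simply the lines that are not comments. As a quick sketch (using a sample file rather than the live /etc/dfs/dfstab), you can list the share commands that will run at startup like this:

```shell
# Sketch: list the active share entries in a dfstab-style file.
# A sample file stands in for /etc/dfs/dfstab here.
dfstab=$(mktemp)
cat > "$dfstab" <<'EOF'
#       Place share(1M) commands here for automatic execution
#       on entering init state 3.
share -F nfs /export/install/sparc_10
share -F nfs /jumpstart
EOF

# Keep only non-comment, non-blank lines: the commands that will execute.
active=$(grep -v '^#' "$dfstab" | grep -v '^[[:space:]]*$')
echo "$active"
rm -f "$dfstab"
```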

The /usr/sbin/share command exports a resource or makes a resource available for mounting. If it is invoked with no arguments, share displays all shared file systems. The share command can be run at the command line to achieve the same results as the /etc/dfs/dfstab file, but you should use this method only when testing.

This is the syntax for the share command:

share -F <FSType> -o <options> -d <description> <pathname>

where <pathname> is the name of the file system to be shared. Table 9.9 describes the options of the share command.

Table 9.9. share Command Syntax



-F <FSType>

Specifies the file system type, such as NFS. If the -F option is omitted, the first file system type listed in /etc/dfs/fstypes is used as the default (nfs).

-o <options>

Is one of the following options:

rw
Makes pathname shared read-write to all clients. This is also the default behavior.

rw=client[:client]...
Makes pathname shared read-write, but only to the listed clients. No other systems can access pathname.

ro
Makes pathname shared read-only to all clients.

ro=client[:client]...
Makes pathname shared read-only, but only to the listed clients. No other systems can access pathname.

aclok
Allows the NFS server to do access control for NFS version 2 clients (running Solaris 2.4 or earlier). When aclok is set on the server, maximum access is given to all clients. For example, with aclok set, if anyone has read permissions, everyone does. If aclok is not set, minimal access is given to all clients.

anon=<uid>
Sets <uid> to be the effective user ID (UID) of unknown users. By default, unknown users are given the effective UID nobody. If <uid> is set to -1, access is denied.

index=<file>
Loads a file rather than a listing of the directory containing this specific file when the directory is referenced by an NFS uniform resource locator (URL).

nosub
Prevents clients from mounting subdirectories of shared directories. This applies only to NFS versions 2 and 3 because NFS version 4 does not use the Mount protocol.

nosuid
Causes the server file system to silently ignore any attempt to enable the setuid or setgid mode bits. By default, clients can create files on the shared file system with the setuid or setgid mode enabled. See Chapter 4 for a description of setuid and setgid.

public
Enables NFS browsing of the file system by a WebNFS-enabled browser. Only one file system per server can use this option. The ro=<list> and rw=<list> options can be included with this option.

root=host[:host]...
Specifies that only root users from the specified hosts have root access. By default, no host has root access, so root users are mapped to an anonymous user ID (see the description of the anon=<uid> option).

sec=<mode>
Uses one or more of the security modes specified by <mode> to authenticate clients. The <mode> option establishes the security mode of NFS servers. If the NFS connection uses the NFS version 3 protocol, the NFS clients must query the server for the appropriate <mode> to use. If the NFS connection uses the NFS version 2 protocol, the NFS client uses the default security mode, which is currently sys. NFS clients can force the use of a specific security mode by specifying the sec=<mode> option on the command line. However, if the file system on the server is not shared with that security mode, the client may be denied access. The following are valid modes:

sys
Use AUTH_SYS authentication. The user's UNIX user ID and group IDs are passed in clear text on the network, unauthenticated by the NFS server.

dh
Use a Diffie-Hellman public key system.

krb5
Use Kerberos version 5 authentication.

krb5i
Use Kerberos version 5 authentication with integrity checking to verify that the data has not been tampered with.

krb5p
Use Kerberos version 5 authentication with integrity checking and privacy protection (encryption). This is the most secure mode, but it also incurs additional overhead.

none
Use null authentication.

log=<tag>
Enables NFS server logging for the specified file system. The optional <tag> determines the location of the related log files. The tag is defined in /etc/nfs/nfslog.conf. If no tag is specified, the default values associated with the global tag in /etc/nfs/nfslog.conf are used. NFS logging is described later in this chapter, in the section "NFS Server Logging." Support for NFS logging is available only for NFS versions 2 and 3.

-d <description>

Provides a description of the resource being shared.
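Several of the options in Table 9.9 are commonly combined after a single -o, separated by commas. The host names and path below are purely illustrative; this sketch just assembles and prints the resulting command rather than executing it:

```shell
# Sketch: assemble a share command from its parts (hypothetical hosts/paths).
fs=/export/projects
opts="rw=clientA:clientB,root=clientA,anon=-1"
desc="project files"
cmd="share -F nfs -o $opts -d \"$desc\" $fs"
echo "$cmd"
```

On a real server, the printed line is what you would place in /etc/dfs/dfstab.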

To share a file system as read-only every time the system is started up, you add this line to the /etc/dfs/dfstab file:

share -F nfs -o ro /data1

After you edit the /etc/dfs/dfstab file, restart the NFS server by either rebooting the system or by typing this:

svcadm restart nfs/server

You need to run the svcadm enable nfs/server command only after you make the first entry in the /etc/dfs/dfstab file. This is because at startup, when the system enters the multi-user-server milestone, mountd and nfsd are not started if the /etc/dfs/dfstab file is empty. After you have made an initial entry and have executed the svcadm enable nfs/server command, you can modify /etc/dfs/dfstab without restarting the daemons. You simply execute the shareall command, and any new entries in the /etc/dfs/dfstab file are shared.
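That first-time-versus-subsequent distinction can be sketched as a simple check: if /etc/dfs/dfstab already has share entries, shareall is enough; if it is empty, the service itself must be enabled first. This is an illustration using a sample file, not a script you would run verbatim:

```shell
# Sketch: choose between the first-time enable and a simple re-share.
# A sample file stands in for /etc/dfs/dfstab.
dfstab=$(mktemp)
printf 'share -F nfs /export/home\n' > "$dfstab"

if grep -q '^[[:space:]]*share' "$dfstab"; then
    # Entries already exist, so mountd and nfsd are running;
    # re-reading the file with shareall is sufficient.
    action="shareall"
else
    # Empty file: the daemons were never started, so the
    # service must be enabled (svcadm enable nfs/server).
    action="svcadm enable nfs/server"
fi
echo "$action"
rm -f "$dfstab"
```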


Sharing Even if you share a file system from the command line by typing the share command, mountd and nfsd still won't run until you make an entry into /etc/dfs/dfstab and run the svcadm enable nfs/server command.

When you have at least one entry in the /etc/dfs/dfstab file and when both mountd and nfsd are running, you can share additional file systems by typing the share command directly from the command line. Be aware, however, that if you don't add the entry to the /etc/dfs/dfstab file, the file system is not automatically shared the next time the system is restarted.

Exam Alert

File System Sharing There is often at least one question on the exam related to the sharing of file systems. Remember that the NFS server must be running in order for the share to take effect.

The dfshares command displays information about the shared resources that are available to the host from an NFS server. Here is the syntax for dfshares:

dfshares <servername>

You can view the shared file systems on a remote NFS server by using the dfshares command, like this:

dfshares apollo

If no servername is specified, all resources currently being shared on the local host are displayed. Another place to find information on shared resources is in the server's /etc/dfs/sharetab file. This file contains a list of the resources currently being shared.
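Entries in /etc/dfs/sharetab are tab-separated, with the shared pathname in the first field. Here is a sketch of pulling the pathnames out (sample data, not a live sharetab):

```shell
# Sketch: extract shared pathnames from sharetab-style data.
sharetab=$(mktemp)
printf '/export/install/sparc_10\t-\tnfs\trw\n/jumpstart\t-\tnfs\tro\n' > "$sharetab"
paths=$(awk -F'\t' '{ print $1 }' "$sharetab")
echo "$paths"
rm -f "$sharetab"
```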

Mounting a Remote File System

Chapter 1 describes how to mount a local file system by using the mount command. You can use the same mount command to mount a shared file system on a remote host using NFS. Here is the syntax for mounting NFS file systems:

mount -F nfs <options> [-o <specific-options>] [-O] \
<server>:<file-system> <mount-point>

In this syntax, server is the name of the NFS server on which the file system is located, file-system is the name of the shared file system on the NFS server, and mount-point is the name of the local directory that serves as the mount point. As you can see, this is similar to mounting a local file system. The options for the mount command are described in Table 9.10.

Table 9.10. NFS mount Command Syntax




-F <FSType>

Specifies the FSType on which to operate; for NFS file systems, the value is nfs.

-r

Mounts the specified file system as read-only.

-m

Does not append an entry to the /etc/mnttab table of mounted file systems.

-o <specific-options>

Can be any of the following options, separated by commas:

rw | ro
The resource is mounted read-write or read-only. The default is rw.

acdirmax=<n>
The maximum time that cached attributes are held after a directory update. The default is 60 seconds.

acdirmin=<n>
The minimum time that cached attributes are held after a directory update. The default is 30 seconds.

acregmax=<n>
The maximum time that cached attributes are held after file modification. The default is 60 seconds.

acregmin=<n>
The minimum time that cached attributes are held after file modification. The default is 3 seconds.

actimeo=<n>
Sets the minimum and maximum times that cached attributes are held, for both directories and regular files, to <n> seconds.

forcedirectio | noforcedirectio
If the file system is mounted with forcedirectio, data is transferred directly between the client and the server, with no buffering on the client. Using noforcedirectio causes buffering to be done on the client.

grpid
The GID of a new file is unconditionally inherited from that of the parent directory, overriding any set-GID options.

noac
Suppresses data and attribute caching.

nocto
Does not perform the normal close-to-open consistency checks. This option can be used when only one client is accessing a specified file system; in this case, performance may be improved, but the option should be used with caution.

suid | nosuid
setuid execution is enabled or disabled. The default is suid.

remount
If a file system is mounted as read-only, this option remounts it as read-write.

bg | fg
If the first attempt to mount the remote file system fails, this option retries it in the background (bg) or in the foreground (fg). The default is fg.

quota
Checks whether the user is over quota on this file system. If the file system has quotas enabled on the server, quotas are checked for operations on this file system.

noquota
Prevents quota from checking whether the user has exceeded the quota on this file system. If the file system has quotas enabled on the server, quotas are still checked for operations on this file system.

retry=<n>
Specifies the number of times to retry the mount operation. The default is 10000.

vers=<NFS-version-number>
By default, the version of the NFS protocol used between the client and the server is the highest one available on both systems. If the NFS server does not support the NFS version 4 protocol, the NFS mount uses version 2 or 3.

port=<n>
Specifies the server IP port number. The default is NFS_PORT.

proto=netid | rdma
The default transport is the first RDMA protocol supported by both the client and the server. If no RDMA transport is available, TCP is used, and failing that, UDP. Note that NFS version 4 does not use UDP, so if you specify proto=udp, NFS version 4 is not used.

public
Forces the use of the public file handle when connecting to the NFS server.

sec=<mode>
Sets the security mode for NFS transactions. NFS versions 3 and 4 mounts negotiate a security mode. Version 3 mounts pick the first mode supported, whereas version 4 mounts try each supported mode in turn until one is successful.

rsize=<n>
Sets the read buffer size to <n> bytes. The default value is 32768 with version 3 or 4 of the NFS protocol. The default can be negotiated down if the server prefers a smaller transfer size. With NFS version 2, the default value is 8192.

wsize=<n>
Sets the write buffer size to <n> bytes. The default value is 32768 with version 3 or 4 of the NFS protocol. The default can be negotiated down if the server prefers a smaller transfer size. With NFS version 2, the default value is 8192.

timeo=<n>
Sets the NFS timeout to <n> tenths of a second. The default value is 11 tenths of a second for connectionless transports and 600 tenths of a second for connection-oriented transports.

retrans=<n>
Sets the number of NFS retransmissions to <n>; the default value is 5. For connection-oriented transports, this option has no effect because it is assumed that the transport performs retransmissions on behalf of NFS.

soft | hard
Returns an error if the server does not respond (soft), or continues to retry the request until the server responds (hard). With hard, the system appears to hang until the NFS server responds. The default value is hard.

intr | nointr
Enables or disables keyboard interrupts to kill a process that hangs while waiting for a response on a hard-mounted file system. The default is intr, which makes it possible for clients to interrupt applications that might be waiting for an NFS server to respond.

xattr | noxattr
Allows or disallows the creation of extended attributes. The default is xattr (allow extended attributes).

-O
The overlay mount lets the file system be mounted over an existing mount point, making the underlying file system inaccessible. If a mount is attempted on a preexisting mount point and this flag is not set, the mount fails, producing the "device busy" error.

File systems mounted with the bg option indicate that mount is to retry in the background if the server's mount daemon (mountd) does not respond when, for example, the NFS server is restarted. From the NFS client, mount retries the request up to the count specified in the retry=<n> option.

After the file system is mounted, each NFS request made in the kernel waits a specified number of seconds for a response (specified with the timeo=<n> option). If no response arrives, the timeout is multiplied by 2, and the request is retransmitted. When the number of retransmissions reaches the number specified in the retrans=<n> option, a file system mounted with the soft option returns an error, whereas a file system mounted with the hard option prints a warning message and continues to retry the request.

Sun recommends that file systems that are mounted read-write or that contain executable files should always be mounted with the hard option. If you use soft-mounted file systems, unexpected I/O errors can occur. For example, consider a write request: If the NFS server goes down, the pending write request simply gives up, resulting in a corrupted file on the remote file system. A read-write file system should always be mounted with the hard and intr options specified. This lets users make their own decisions about killing hung processes. You use the following to mount a file system named /data located on a host named thor with the hard and intr options:

mount -F nfs -o hard,intr thor:/data /data
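The timeout-doubling behavior can be sketched numerically. Using the connectionless-transport defaults of timeo=11 (tenths of a second) and retrans=5, the successive waits before each retransmission work out as follows (an illustration of the arithmetic, not a measurement):

```shell
# Sketch: successive NFS timeouts with timeo=11 and retrans=5.
# Each retransmission doubles the previous timeout (tenths of a second).
timeo=11
retrans=5
waits=""
i=0
while [ "$i" -lt "$retrans" ]; do
    waits="$waits $timeo"
    timeo=$((timeo * 2))
    i=$((i + 1))
done
echo "waits (tenths of a second):$waits"
```

After the final retransmission, a soft mount gives up with an error, whereas a hard mount prints a warning and keeps retrying.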

If a file system is mounted hard and the intr option is not specified, the process hangs when the NFS server goes down or the network connection is lost. The process continues to hang until the NFS server or network connection becomes operational. For a terminal process, this can be annoying. If intr is specified, sending an interrupt signal to the process kills it. For a terminal process, you can do this by pressing Ctrl+C. For a background process, sending an INT or QUIT signal, as follows, usually works:

kill -QUIT 3421


Overkill Won't Work Sending a KILL signal (-9) does not terminate a hung NFS process.

To mount a file system called /data that is located on an NFS server called thor, you issue the following command, as root, from the NFS client:

mount -F nfs -o ro thor:/data /thor_data

In this case, the /data file system from the server thor is mounted read-only on /thor_data on the local system. Mounting from the command line enables temporary viewing of the file system. If the umount command is issued or the client is restarted, the mount is lost. If you would like this file system to be mounted automatically at every startup, you can add the following line to the /etc/vfstab file:

thor:/data - /thor_data nfs - yes ro
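The entry uses the standard seven vfstab fields: device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, and mount options. Here is a quick sketch that splits the example entry into those fields:

```shell
# Sketch: split an NFS vfstab entry into its seven fields.
entry="thor:/data - /thor_data nfs - yes ro"
# The unquoted expansion is intentional: it splits the entry on whitespace.
set -- $entry
echo "device=$1 mountpoint=$3 fstype=$4 atboot=$6 options=$7"
```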


Mount Permissions The mount and umount commands require root access. The umount command and /etc/vfstab file are described in Chapter 1.

To view resources that can be mounted on the local or remote system, you use the dfmounts command, as follows:

dfmounts sparcserver

The system responds with a list of file systems currently mounted on sparcserver:

RESOURCE       SERVER PATHNAME
  -            ultra5 /usr
  -            ultra5 /usr/dt

Sometimes you rely on NFS mount points for critical information. If the NFS server were to go down unexpectedly, you would lose the information contained at that mount point. You can address this issue by using client-side failover. With client-side failover, you specify an alternative file system to use in case the primary file system fails. The primary and alternative file systems should contain equivalent directory structures and identical files. This option is available only on read-only file systems.

To set up client-side failover, on the NFS client, mount the file system by using the ro option. You can do this from the command line or by adding an entry to the /etc/vfstab file that looks like the following:

zeus,thor:/data  -  /remote_data  nfs  -  no  ro

If multiple file systems are named and the first server in the list is down, failover uses the next alternative server to access files. To mount a replicated set of NFS file systems, which might have different paths to the file system, you use the following mount command:

mount -F nfs -o ro zeus:/usr/local/data,thor:/home/data /usr/local/data

Replication is discussed further in the "AutoFS" section, later in this chapter.

NFS Server Logging

A feature that first appeared in Solaris 8 is NFS server logging (refer to Chapter 1). NFS server logging provides event and audit logging functionality to networked file systems. The daemon nfslogd provides NFS logging, and you enable it by using the log=<tag> option in the share command, as described earlier in this chapter, in the section "Setting Up NFS." When NFS logging is enabled, the kernel records all NFS operations on the file system in a buffer. The data recorded includes a timestamp, the client Internet Protocol (IP) address, the UID of the requestor, the file handle of the resource that is being accessed, and the type of operation that occurred. The nfslogd daemon converts this information into ASCII records that are stored in ASCII log files.


No Logging in NFS Version 4 Remember that NFS logging is not supported in NFS version 4.

To enable NFS server logging, follow the procedure described in Step by Step 9.5.

Step By Step 9.5: Enabling NFS Server Logging

1. As root, share the NFS file system by typing the following entry at the command prompt:

share -F nfs -o ro,log=global <file-system-name>

2. Add the entry from step 1 to your /etc/dfs/dfstab file if you want it to go into effect every time the server is booted.

3. If the nfslogd daemon is not already running, start it by entering this:

/usr/lib/nfs/nfslogd

Exam Alert

NFS Server Logging Configuration You should be familiar with the concept of NFS server logging, especially the location of the configuration file (/etc/nfs/nfslog.conf). The nfs directory in the path can be easily forgotten, and you could lose an exam point unnecessarily if you leave it out.

You can change the file configuration settings in the NFS server logging configuration file /etc/nfs/nfslog.conf. This file defines pathnames, filenames, and types of logging to be used by nfslogd. Each definition is associated with a tag. The global tag defines the default values, but you can create new tags and specify them for each file system you share. The NFS operations to be logged by nfslogd are defined in the /etc/default/nfslogd configuration file.
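As an illustration of the tag format, the global entry shipped by default in /etc/nfs/nfslog.conf looks like the following (verify against your own system before relying on it):

```
global  defaultdir=/var/nfs \
        log=nfslog fhtable=fhtable buffer=nfslog_workbuffer
```

Each shared file system can reference its own tag with different values for the log directory, log file, and file-handle table.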


Logging Pros and Cons NFS server logging is particularly useful for being able to audit operations carried out on a shared file system. The logging can also be extended to audit directory creations and deletions. With logging enabled, however, the logs can become large and consume huge amounts of disk space. It is necessary to configure NFS logging appropriately so that the logs are pruned at regular intervals.
