Virtual File Systems, Swap Space, and Core Dumps
Physical memory is supplemented by specially configured space on disk known as swap. Swap is normally configured on a dedicated disk partition, known as a swap partition. In addition to swap partitions, special files called swap files can be created in existing UFS file systems to provide additional swap space when needed. The Solaris virtual memory system provides transparent access to physical memory, swap, and memory-mapped objects.
The swap command is used to add, delete, and monitor swap files. The options for swap are shown in Table 26.
Table 26. swap Command Options
-a swapname   Adds a specified swap area. You can also use the script /sbin/swapadd to add a new swap file.
-d swapname   Deletes a specified swap area.
-l            Displays the location of your system's swap areas.
-s            Displays a summary of the system's swap space.
The Solaris installation program automatically allocates 512MB of swap if no value is specified.
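As an illustrative sketch (the file path and size are assumptions), the following sequence inspects the current swap configuration and then creates and activates an additional swap file:

```shell
# List the configured swap areas and display a usage summary
swap -l
swap -s

# Create a 512MB swap file and add it as additional swap space
mkfile 512m /export/data/swapfile
swap -a /export/data/swapfile

# To make the swap file persistent across reboots, add a line to /etc/vfstab:
#   /export/data/swapfile  -  -  swap  -  no  -

# Remove the swap area when it is no longer needed
swap -d /export/data/swapfile
rm /export/data/swapfile
```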
Core File and Crash Dump Configuration
Core files are created when a program, or application, terminates abnormally. The default location for a core file to be written is the current working directory.
Core files are managed using the coreadm command. When entered with no options, coreadm displays the current configuration, as specified by /etc/coreadm.conf. The options are shown in Table 27.
Table 27. coreadm Syntax
-g pattern    Set the global core file name pattern.
-G content    Set the global core file content using the content description tokens.
-i pattern    Set the default per-process core file name pattern.
-I content    Set the default per-process core file content.
-d option     Disable the specified core file option.
-e option     Enable the specified core file option.
-p pattern    Set the per-process core file name pattern for each of the specified PIDs.
-P content    Set the per-process core file content for each of the specified PIDs.
-u            Update the systemwide core file options from the configuration file /etc/coreadm.conf.
Core file names can be customized using a number of embedded variables. Table 28 lists the possible patterns:
Table 28. coreadm Patterns
%p    Process ID (PID)
%u    Effective user ID (EUID)
%g    Effective group ID (EGID)
%d    Executable file directory name
%n    System node name (same as running uname -n)
%m    Machine hardware name (same as running uname -m)
%t    Decimal value of time (number of seconds since 00:00:00 January 1, 1970)
%z    Name of the zone in which the process executed (zonename)
%%    A literal % character
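For example (the directory /var/core is an assumption), the patterns above can be combined to collect all global core files in one place, named by node name and PID:

```shell
# Display the current core file configuration
coreadm

# Store global core files in /var/core, named core.<nodename>.<pid>
mkdir -p /var/core
coreadm -g /var/core/core.%n.%p -e global

# Log a syslog message whenever a global core file is created
coreadm -e log

# Update the running configuration from /etc/coreadm.conf
coreadm -u
```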
A crash dump is a snapshot of the kernel memory, saved on disk, at the time a fatal system error occurred. When a serious error is encountered, the system displays an error message on the console, dumps the contents of kernel memory by default, and then reboots the system.
Normally, crash dumps are configured to use the swap partition to write the contents of memory. The savecore program runs when the system reboots and saves the image in a predefined location, usually /var/crash/<hostname> where <hostname> represents the name of your system.
Configuration of crash dump files is carried out with the dumpadm command. Running this command with no options will display the current configuration by reading the file /etc/dumpadm.conf.
dumpadm options are shown in Table 29.
Table 29. dumpadm Options
-c content-type    Modify the dump content; valid values are kernel (kernel pages only), all (all memory pages), and curproc (kernel pages plus the pages of the currently executing process).
-d dump-device     Modify the dump device, specified either as an absolute pathname (such as /dev/dsk/c0t0d0s3) or as the word swap, in which case the system identifies the best swap area to use.
-m mink|minm|min%  Maintain minimum free space in the current savecore directory, specified in kilobytes, megabytes, or a percentage of the total current size of the directory.
-n                 Disable savecore from running on reboot. This is not recommended, because any crash dumps would be lost.
-r root-dir        Specify an alternative root directory. If this option is not used, the default "/" is used.
-s savecore-dir    Specify an alternative savecore directory, instead of the default /var/crash/hostname.
-y                 Enable savecore to run on the next reboot. This setting is the default.
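The options above can be combined as in the following sketch (the device path and directory name are illustrative):

```shell
# Display the current crash dump configuration from /etc/dumpadm.conf
dumpadm

# Dump only kernel pages, and use a dedicated slice as the dump device
dumpadm -c kernel -d /dev/dsk/c0t0d0s3

# Keep at least 10% of the savecore file system free
dumpadm -m 10%

# Save crash dumps in an alternative directory
dumpadm -s /var/crash/altdir
```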
The gcore command can be used to create a core image of a specified running process. By default, the resulting file is named core.<pid>, where <pid> is the process ID of the running process.
gcore options are shown in Table 30.
Table 30. gcore Options
-c content    Produces image files with the specified content. This option uses the same content tokens as coreadm, but cannot be used with the -p or -g options.
-F            Force. Grabs the specified process even if another process has control.
-g            Produces core image files in the global core file repository, using the global content that was configured with coreadm.
-o filename   Uses filename instead of core as the first part of the name of the core image files.
-p            Produces process-specific core image files, with process-specific content, as specified by coreadm.
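For example (the PID 1234 is illustrative), a core image of a running process can be taken without disturbing it:

```shell
# Write a core image of process 1234 to core.1234 in the current directory
gcore 1234

# Write the image as snapshot.1234 instead
gcore -o snapshot 1234
```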
Network File System (NFS)
The NFS service allows computers of different architectures, running different operating systems, to share file systems across a network. Just as the mount command lets you mount a file system on a local disk, NFS lets you mount a file system that is located on another system anywhere on the network. The NFS service provides the following benefits:
Lets multiple computers use the same files so that everyone on the network can access the same data. This eliminates the need to have redundant data on several systems.
Reduces storage costs by having computers share applications and data.
Provides data consistency and reliability because all users can read the same set of files.
Makes mounting of file systems transparent to users.
Makes accessing remote files transparent to users.
Supports heterogeneous environments.
Reduces system administration overhead.
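For example (the server name apollo and the paths are assumptions), a remote file system is mounted on an NFS client much like a local one:

```shell
# Mount the /export/home file system shared by the server 'apollo'
mkdir -p /mnt/home
mount -F nfs apollo:/export/home /mnt/home

# To mount it automatically at boot, add a line to /etc/vfstab:
#   apollo:/export/home  -  /mnt/home  nfs  -  yes  rw,soft

# Unmount when finished
umount /mnt/home
```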
Solaris 10 introduced NFS version 4, which has the following features:
The UID and GID are represented as strings, and a new daemon, nfsmapid, provides the mapping to and from numeric IDs.
The default transport for NFS version 4 is the Remote Direct Memory Access (RDMA) protocol, a technology for memory-to-memory data transfer over high-speed networks.
All state and lock information is destroyed when a file system is unshared. In previous versions of NFS, this information was retained.
NFS4 provides a pseudo file system to give clients access to exported objects on the NFS server.
NFS4 is a stateful protocol: both the client and server hold information about current locks and open files. When a failure occurs, the two work together to re-establish the open or locked files.
NFS4 no longer uses the mountd, statd, or nfslogd daemons.
NFS4 supports delegation, which allows the management responsibility for a file to be delegated to the client. Both the server and client support delegation. A client can be granted a read delegation, which can be shared by multiple clients, or a write delegation, which provides exclusive access to a file.
NFS uses a number of daemons to handle its services. These services are initialized at startup from the svc:/network/nfs/server:default and svc:/network/nfs/client:default startup service management functions. The most important NFS daemons are outlined in Table 31.
Table 31. NFS Daemons
nfsd      This daemon handles file system exporting and file access requests from remote systems. An NFS server runs multiple instances of this daemon. This daemon is usually invoked at the multi-user-server milestone and is started by the svc:/network/nfs/server:default service identifier.
mountd    This daemon handles mount requests from NFS clients and also provides information about which file systems are mounted by which clients. Use the showmount command to view this information. This daemon is usually invoked at the multi-user-server milestone and is started by the svc:/network/nfs/server:default service identifier. This daemon is not used in NFS version 4.
lockd     This daemon runs on the NFS server and NFS client, and provides file-locking services in NFS. This daemon is started by the svc:/network/nfs/client service identifier at the multi-user milestone.
statd     This daemon runs on the NFS server and NFS client, and interacts with lockd to provide the crash and recovery functions for the locking services on NFS. This daemon is started by the svc:/network/nfs/client service identifier at the multi-user milestone. This daemon is not used in NFS version 4.
rpcbind   This daemon facilitates the initial connection between the client and the server.
nfsmapid  A new daemon that maps to and from NFS version 4 owner and group identifications and UID and GID numbers. It uses entries in the passwd and group files to carry out the mapping, and references /etc/nsswitch.conf to determine the order of access.
nfs4cbd   A new client-side daemon that listens on each transport and manages the callback functions to the NFS server.
nfslogd   This daemon provides operational logging for the Solaris NFS server. NFS logging uses the configuration file /etc/nfs/nfslog.conf. The nfslogd daemon is not used in NFS version 4.
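On the server side, the daemons above come into play once a file system is shared. As a sketch (the path and server name are illustrative):

```shell
# Share a file system read-write over NFS
share -F nfs -o rw /export/home

# Share every file system listed in /etc/dfs/dfstab
shareall

# Verify what this server is currently sharing
share

# From a client, list the file systems a server has shared
# (handled by mountd; the server name 'apollo' is an assumption)
showmount -e apollo
```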
When a network contains even a moderate number of systems, all trying to mount file systems from each other, managing NFS can quickly become a nightmare. The Autofs facility, also called the automounter, is designed to handle such situations by providing a method in which remote directories are mounted only when they are being used.
When a request is made to access a file system at an Autofs mount point, the system goes through the following steps:
Autofs intercepts the request.
Autofs sends a message to the automountd daemon for the requested file system to be mounted.
automountd locates the file system information in a map and performs the mount.
Autofs allows the intercepted request to proceed.
Autofs unmounts the file system after a period of inactivity.
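A minimal sketch of the map entries involved (the server name apollo and the paths are assumptions):

```shell
# /etc/auto_master: associate the mount point /home with the auto_home map
#   /home  auto_home  -nobrowse

# /etc/auto_home: mount each user's home directory from the server on demand
#   *  apollo:/export/home/&

# After editing the maps, restart the Autofs service so automountd rereads them
svcadm restart svc:/system/filesystem/autofs
```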