Here are a few final words about NFS.
On many machines, NFS depends on NIS or NIS+. Both NFS and NIS implementations have had some well-known flaws and bugs in recent years. Not only are these flaws well known, but there are a number of hacker toolboxes available that include programs to take advantage of them. Therefore, if you are running NFS, you should be certain that you are up to date on vendor patches and bug fixes. In particular:
Make sure that your version of the RPC portmapper does not allow proxy (forwarded) requests, and that your own system does not appear in the export list for any of its own partitions. Otherwise, a forged packet sent to your portmapper can be relayed to your NFS server so that it appears to be a valid request coming from your own machine. (The first sketch following this list shows a quick check for the self-export problem.)
Make sure that your NFS either uses Secure RPC or examines the full 32 bits of the UIDs passed in. Some early versions of NFS examined only the least significant 16 bits of the passed-in UID for some tests, so requests could be crafted with a UID that would not get mapped to nobody and would instead function as root accesses. (The second sketch following this list illustrates the arithmetic.)
Make sure that your version of NFS does not allow remote users to issue mknod commands on partitions they import from your servers. A user who creates a device file equivalent to /dev/kmem on one of your exported partitions has taken a big first step toward a complete compromise of your system. (The third sketch following this list shows one way to watch for stray device files.)
Make sure that your NFS does the correct thing when someone does a cd .. in the top level of a directory imported from your server. Some older versions of NFS would return a file handle for the real parent directory on the server, rather than for the parent of the client's mount point. Because NFS does not keep track of how a client obtained a file handle, and because it applies permissions to whole partitions rather than to mount points, this behavior could compromise your server's security.
In particular, when a server exported a subdirectory as the root partition for a diskless workstation, a user on the workstation could type cd /; cd .. and, instead of landing in the root directory again, gain access to the parent directory on the server! Compounding the problem, such a partition had to be exported with root= access. As a result, the client had unrestricted access to the server's disks!
Make sure that your server parses the export option list correctly. Some past and current NFS implementations mix up the access options: if you specify access= and rw= on the same export, or access= and root=, the system sometimes forgets the access= specification and exports the partition to every other machine in the world. (The last sketch following this list flags such entries for review.)
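As a first sketch, here is one way to look for the self-export problem described above. It is only a starting point: the showmount command, its output format, and the name returned by hostname all vary from system to system, so treat the details as assumptions to adapt rather than as a recipe.

    #!/bin/sh
    # Sketch: warn if this host appears in its own NFS export list.
    # The showmount output format and the hostname match are assumptions
    # about your particular system.
    HOST=`hostname`

    if showmount -e localhost | grep "$HOST" > /dev/null
    then
            echo "WARNING: $HOST appears in its own export list"
    fi

    # While you are at it, list what is registered with the portmapper
    # and make sure that nothing unexpected is there.
    rpcinfo -p localhost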
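The second sketch shows why checking only 16 bits of a UID is dangerous: a UID of 65536 (hex 0x10000) has low-order 16 bits of zero, which is the UID of root. The value here is invented purely to illustrate the arithmetic.

    #!/bin/sh
    # Sketch: the low 16 bits of UID 65536 are 0 -- that is, root.
    uid=65536
    echo "full 32-bit UID: $uid"
    echo "low 16 bits:     `expr $uid % 65536`"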
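The third sketch is a periodic sweep for device files that have appeared in an exported tree. The /export path is an assumption; substitute the directories you actually export, and run the script from cron so that you see the output regularly.

    #!/bin/sh
    # Sketch: report any block or character device files found in an
    # exported directory tree.  /export is a placeholder path.
    find /export \( -type b -o -type c \) -print |
    while read file
    do
            echo "device file found in exported tree: $file"
    done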
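Finally, for the export-option problem, you can at least flag entries that combine access= with rw= or root= so that you can check by hand how your particular implementation treats them. SunOS-style /etc/exports syntax is assumed here; adjust the filename and option names to suit your system.

    #!/bin/sh
    # Sketch: flag /etc/exports entries that mix access= with rw= or root=.
    awk '/access=/ && (/rw=/ || /root=/) {
            print "verify this export entry by hand: " $0
    }' /etc/exports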
NFS and other distributed filesystems provide some wonderful functionality, but they are also a source of continuing headaches. Consider whether you really need all the flexibility and power of NFS and distributed systems. By reexamining your fundamental assumptions, you may find that you can reconfigure your systems to avoid NFS's problems entirely by eliminating NFS itself.
For instance, one reason often given for having NFS is that it makes it easy to keep software in sync on many machines at once. That argument carried more weight before the days of high-speed local networks and cheap disks. You might be better served by equipping each workstation in your enterprise with a 2GB or 4GB disk holding a complete copy of all of your applications, and using a facility such as rdist (see Chapter 9, and the sketch below) to make necessary updates. Not only will this configuration give you better security, but it will also provide better fault tolerance: if the server or network goes down, each system still has everything it needs to continue operating. This configuration also makes it easier to customize individual systems.
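Here is a sketch of the rdist approach; see Chapter 9 for the details of the Distfile syntax. The hostnames and directories below are placeholders, not recommendations.

    #!/bin/sh
    # Sketch: push local copies of software to each workstation with rdist
    # instead of sharing them over NFS.  The Distfile might contain
    # something like:
    #
    #       HOSTS = ( ws1 ws2 ws3 )
    #       FILES = ( /usr/local/bin /usr/local/lib )
    #
    #       ${FILES} -> ${HOSTS}
    #               install ;
    #               notify root ;
    #
    # ws1, ws2, ws3, and the directories are placeholders for your own.
    rdist -f Distfile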
A second argument for network filesystems is that they let users reach their home directories easily, no matter which machine they use. While this may make sense in a university student lab, most employees use the same machine almost all of the time, so there is little need to treat every machine as interchangeable.
Network filesystems are sometimes used to share large databases among multiple machines. But network filesystems are a poor choice for this application: locking the database and synchronizing updates over the network are usually harder than simply having users share a single machine through remote logins. In fact, with the X Window System, opening a window on a central database machine is convenient and often as fast as (or faster than) accessing the data via a network filesystem. Alternatively, you can use a database server with client programs that run locally.
The argument is also made that sharing filesystems over the network results in lower cost. In point of fact, such a configuration may be more expensive than the alternatives. For instance, putting high-resolution color X display terminals on each desktop and connecting them with 100-Mbps Ethernet to a multiprocessor server equipped with RAID disks may be more cost-effective, provide better security, give better performance, and use less electricity. The result may be a system that is cheaper to buy, operate, and maintain. The only loss is the cachet of equipping each user with a top-of-the-line workstation on his desktop when all he really needs is access to a keyboard, mouse, and fast display.
Indeed, the only remaining argument for network filesystems may be security. Today, most X terminals have no support for encryption.[13] On client/server systems that use Kerberos or DCE, by contrast, you can avoid sending unencrypted passwords over the network. But be careful: you get the data-confidentiality benefits of this approach only if your remote filesystem encrypts all user data, and most don't.
[13] We expect this to change in the near future.
Questioning your basic assumptions may simultaneously save you time and money and improve your security.