LynxOS Networking Guide
NFS
During initial installation of LynxOS, NFS (Network File System) support is installed with the installit utility. NFS can also be configured, installed, or removed at any time after initial installation. Note that NFS requires TCP/IP to function. This chapter describes NFS basics, as well as advanced NFS configuration options.
Overview
NFS is a suite of user programs and kernel functionality that allows access to a remote host's file systems as if they were local. All or part of a remote host's file system is mounted into the local host's file system, giving local users transparent access to remote files. Once mounted, any file on the remote file system is accessible and can be operated on by most utilities, no differently from a file located on a local disk.
In addition to basic file system access over NFS, LynxOS also supports:
- NFS client-side caching, which caches directory entries, file attributes, and file data, depending on the type of access.
- NFS file locking, which provides advisory record locking between cooperating processes over NFS. This facility also supports monitored-class locking and can recover from NFS server and client crashes.
NFS software is divided into two parts:
- The NFS client, which runs on the host accessing remote files
- The NFS server, which runs on the host whose local disk holds those files
When attempting to access a file in an NFS-mounted directory, the NFS client sends a request to the NFS server on the remote system. The NFS server accepts and manages these requests from remote NFS clients for access to the local disk. The server enforces permissions and performs the actual manipulations of the local disk.
Installing/Removing NFS Support
NFS can be installed or removed at any time after the initial installation of LynxOS. Note that NFS requires TCP/IP. The Install.nfs and Uninstall.nfs scripts are located in /usr/bin.
Installing NFS
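To add NFS support, run the Install.nfs script noted above. The invocation below assumes the script takes no arguments:

```
# Install NFS support (TCP/IP must already be installed)
/usr/bin/Install.nfs
```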
Removing NFS
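To remove NFS support, run the companion Uninstall.nfs script; as above, the invocation assumes no arguments are required:

```
# Remove NFS support
/usr/bin/Uninstall.nfs
```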
These installation and uninstallation scripts automatically install and configure NFS for LynxOS.
Tuning the NFS Server Kernel
The NFS server is tuned by increasing or decreasing the values of five parameters. These parameters are in the structure nfssvc_info located in /sys/devices/nfssvc_info.c.
To change any of these parameters, edit the file /sys/devices/nfssvc_info.c. After editing, the device library must be updated and the kernel rebuilt.
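The exact rebuild procedure is given in "Customizing LynxOS" in the LynxOS User's Guide; as a rough sketch only, assuming the standard /sys build tree and its Makefile targets, the sequence resembles:

```
cd /sys/devices
make install        # rebuild and install the device library
cd /sys/lynx.os
make install        # relink the kernel with the new parameter values
reboot              # boot the rebuilt kernel
```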
NFS Server Tunable Parameters
- The default is 16. If the system is used as a file server, this parameter may need to be increased.
- The default is 16. For security reasons, this parameter can be decreased.
- The maximum number of hosts that can be specified with read and write access for an exported directory. The default is 16. For security reasons, this parameter can be decreased.
- The default is 8. If the system is used as a file server, multiple daemons should be started. This is done by adding a count parameter to the /net/nfsd line in /net/rc.network. For example, to start three NFS server daemons, modify the line to read: /net/nfsd 3.
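The modified daemon-startup line in /net/rc.network for the three-daemon example would read as follows (the trailing comment is illustrative):

```
/net/nfsd 3    # start three NFS server daemons
```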
Tuning the NFS Client Kernel
The NFS client is tuned by changing the values of six kernel parameters, contained in the structure unfs_info in /sys/devices/nfsinfo.c:
- The default is 64. The value of this parameter should be increased for heavy NFS traffic.
- The default is 8. If this value is increased, make sure that NMOUNTS in /sys/lynx.os/uparam.h is also increased to an equal or greater value.
- The default is 32. Under heavy NFS client traffic, multiple client daemons should be started. This can be done by duplicating the /net/unfsio line in /net/rc.network.
- The default is 32. The value of this parameter should be increased for heavy NFS traffic.
- The default is 8192. This value should be reduced to 4096 or less to interoperate with systems that have slower (for example, 8-bit) Ethernet boards.
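A sketch of the /net/rc.network adjustment described above, where each /net/unfsio line starts one client daemon (the comments are illustrative):

```
/net/unfsio    # first NFS client I/O daemon
/net/unfsio    # duplicated line starts a second client daemon
```

Correspondingly, if the mount-count parameter is raised, /sys/lynx.os/uparam.h would carry a line such as `#define NMOUNTS 16`, raised to at least the new mount limit.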
Edit the /sys/devices/nfsinfo.c file to change any of these parameters. Be sure to change only the values. After making the desired changes, the device library must be updated, and the kernel rebuilt. The LynxOS User's Guide describes how to update and rebuild the kernel.
Tuning the NFS Client-Side Cache
The amount of memory allocated for NFS client-side caching can be tuned by increasing or decreasing the values of the various fields in the nfs_cache_info structure in the /sys/devices/nfs_cache_info.c file.
The various fields in this structure are explained below:
Once the desired values for the tuning parameters are updated in /sys/devices/nfs_cache_info.c, the device library must be updated and the kernel rebuilt. These steps are described in "Customizing LynxOS" in the LynxOS User's Guide.
Tuning NFS File Locking
The lock device driver provides advisory file locking over NFS. Advisory file locking works only with cooperating processes. File locking is supported through the fcntl() and lockf() system calls. The driver also supports monitored-class locking, in which all locks are monitored on both the client and server sides. The file /sys/devices/lock_info.c controls the number of NFS file-locking client requests that can be enqueued in the driver.
Each entry in the request queue takes 3068 bytes. The default value for this field is 10 (for a total of 10 * 3068 bytes of allocated memory).
Once the desired parameter value is updated in the file /sys/devices/lock_info.c, the device library must be updated and the kernel rebuilt. These steps are described in "Customizing LynxOS" in the LynxOS User's Guide.
Configuring the NFS Server
The only file that must be modified to allow other systems to access data on the system is /etc/exports. This file is a database used by the NFS server to determine if the requesting host is authorized to share the system's data. The syntax for each entry is as follows:
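Assuming the common SunOS-style /etc/exports format, each entry has the general shape:

```
directory [-option[,option]...] [host ...]
```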
The first field is the directory that is to be exported. If no options are given, any host may mount this directory and access the files for reading and writing.
Access to the directory may be given only to specific hosts. In the following example, the directory /mydata is exported for access by hosts shark and orca.
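Assuming hosts are simply listed after the directory name, the corresponding entry would be:

```
/mydata shark orca
```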
Permission to access an NFS-mounted directory as root must be explicitly declared. An attempt by a remote system to write to an NFS-mounted directory as root fails, even if the directory is mounted read/write. To allow the remote system orca to have root access to the directory /mydata, the following line should be added to /etc/exports:
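Assuming a SunOS-style -root= option, the entry might read:

```
/mydata -root=orca
```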
All directories are exported as read/write unless otherwise specified. The -ro flag is used to export a directory as read-only to everyone. The -rw flag is used to export a directory as read/write to specific users; all other users have read-only access. In the following example, /mydata is exported read-only.
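Using the -ro flag described above, the entry would be:

```
/mydata -ro
```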
To restrict the read/write access of /mydata to only the hosts shark and orca, but allow read-only access to everyone else, the following entry would be added to /etc/exports:
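Assuming a SunOS-style -rw= option taking a colon-separated host list, the entry might read:

```
/mydata -rw=shark:orca
```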
Any of the previous examples may be grouped, giving multiple accesses:
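A sketch of such a grouped entry, assuming SunOS-style -root=, -rw=, and -access= options:

```
/mydata -root=shark,rw=shark:orca,access=shark:orca:fish
```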
In the above example, access to the exported directory is limited to hosts shark, orca, and fish. Only users with root access on shark have root access to the exported directory, and only users on shark and orca have read/write capabilities. Users on fish have read-only access.
LynuxWorks, Inc. · 855 Branham Lane East, San Jose, CA 95138 · http://www.lynuxworks.com · 1.800.255.5969