
LynxOS Networking Guide


NFS

During initial installation of LynxOS, NFS (Network File System) support is installed with the installit utility. NFS can also be configured, installed, or removed at any time after initial installation. Note that NFS requires TCP/IP to function. This chapter describes NFS basics, as well as advanced NFS configuration options.

Overview

NFS is a suite of user programs and kernel functionality that allows access to a remote host's file systems as if they were local. All or part of a remote host's file system is mounted into the local host's file system, allowing transparent access to remote files by local users. Once mounted, any file on the remote file system is accessible. Such files can be operated on by most utilities and behave no differently than files located on a local disk.
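For example, once a remote host has exported a directory (see "Configuring the NFS Server" later in this chapter), it can be mounted on the local host. The following is only a sketch, assuming a server named shark exporting /mydata and a local mount point of /usr/mydata (all illustrative); the exact mount invocation for NFS file systems depends on the options supported by mount on the system:

# mount shark:/mydata /usr/mydata

After this, files under /usr/mydata on the local host refer to files under /mydata on shark.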

In addition to basic file system access over NFS, LynxOS also supports the following:

- NFS client-side caching, which caches directory entries, file attributes, and file data, depending on the type of access.

- NFS file locking, which provides advisory record locking between cooperating processes over NFS. This facility also supports monitored-class locking and can recover from NFS server and client crashes.

NFS software is divided into two parts: the NFS client and the NFS server.

When attempting to access a file in an NFS-mounted directory, the NFS client sends a request to the NFS server on the remote system. The NFS server accepts and manages these requests from the remote NFS client for access to the local disk. The server enforces permissions and performs the actual manipulations to the local disk.

Installing/Removing NFS Support

NFS can be installed and/or removed after initial installation of LynxOS. Note that NFS requires TCP/IP. The Install.nfs and Uninstall.nfs scripts are located in /usr/bin.

Installing NFS

To install NFS:

# Install.nfs

Removing NFS

To remove NFS:

# Uninstall.nfs

These installation and uninstallation scripts automatically install and configure NFS for LynxOS.

Tuning the NFS Server Kernel

The NFS server is tuned by increasing or decreasing the values of five parameters. These parameters are in the structure nfssvc_info located in /sys/devices/nfssvc_info.c.

To change any of these parameters, edit the file /sys/devices/nfssvc_info.c. After editing, the device library must be updated and the kernel rebuilt.
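The rebuild procedure is described in "Customizing LynxOS" in the LynxOS User's Guide; as a rough sketch, assuming the standard LynxOS source tree (the exact make targets and reboot options are given in that guide), the sequence looks like:

# cd /sys/devices
# make install       # rebuild the device library
# cd /sys/lynx.os
# make install       # rebuild the kernel
# reboot -aN         # reboot with the new kernel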

Note: The /etc/exports file specifies the directories to be exported and the corresponding access list.

NFS Server Tunable Parameters

Tunable parameters include:

- The default is 16. If the system is used as a file server, this parameter may need to be increased.

- The default is 16.

- The default is 16. For security reasons, this parameter can be decreased.

- The default is 16. For security reasons, this parameter can be decreased.

- The default is 8. If the system is used as a file server, multiple daemons should be started. This is done by adding a count parameter to the line /net/nfsd in /net/rc.network. For example, to start three NFS server daemons, modify the line to read: /net/nfsd 3.

Tuning the NFS Client Kernel

The NFS client is tuned by changing the values of six kernel parameters. The structure unfs_info, in /sys/devices/nfsinfo.c, contains six tunable parameters:

- The default is 64. The value of this parameter should be increased for heavy NFS traffic.

- The default is 8. If this value is increased, make sure that NMOUNTS in /sys/lynx.os/uparam.h is also increased to an equal or greater value.

- The default is 32. In case of heavy NFS client traffic, multiple client daemons should be started. This can be done by duplicating the /net/unfsio line in /net/rc.network.

- The default is 32. The value of this parameter should be increased for heavy NFS traffic.

- The default is 8192. This value should be reduced to 4096 or less to interface with systems that have slower (i.e. 8-bit) Ethernet boards.

Edit the /sys/devices/nfsinfo.c file to change any of these parameters. Be sure to change only the values. After making the desired changes, the device library must be updated, and the kernel rebuilt. The LynxOS User's Guide describes how to update and rebuild the kernel.

Tuning the NFS Client-Side Cache

The amount of memory allocated for NFS client-side caching can be tuned by increasing or decreasing the values of the various fields of the nfs_cache_info structure in the /sys/devices/nfs_cache_info.c file.

nfs_cache_info.c example file
struct nfs_cache_info {
    int num_attrcache;   /* num attribute cache blocks */
    int num_dnlchdrs;    /* num of dir name lookup cache headers */
    int size_dnlcblk;    /* size of each dnlc cache block */
    int num_datacache;   /* num data cache headers */
    int size_datacache;  /* amount memory for data cache */
    int hash_tblsize;    /* hash table size for data/dnlc blocks */
    /* default mount-time options */
    int blksize;         /* caching block size */
    int dis_cache;       /* disable all caching */
    int dis_datacache;   /* disable file data caching */
    int acregmin;        /* min time in secs for reg files */
    int acregmax;        /* max time in secs for reg files */
    int acdirmin;        /* min time in secs for dir files */
    int acdirmax;        /* max time in secs for dir files */
};

The various fields in this structure are explained below:

nfs_cache_info struct Parameters
num_attrcache
This parameter controls the total number of simultaneous file/directory entries that are allowed to be cached in this client. Each entry for this parameter consumes 180 bytes. Default value for this field is 512 (total memory consumption 512*180 bytes).
num_dnlchdrs
This parameter controls the maximum number of directory name lookup buffers that are cached in the client. Each entry corresponding to this field consumes 32 bytes. The default value for this field is 128 (total memory consumption 32*128 bytes). Depending on the size of the directory files, more than one entry may be used for each directory.
size_dnlcblk
This parameter controls the size of the buffer allocated for each directory name lookup cache header, which is used to store the file name and its NFS File Handle. The default value for this field is 512. Each file name entry associated with the directory takes 40+FileNameLen bytes.
num_datacache
This parameter indicates the number of data cache headers allocated for the NFS cache. Effectively, this parameter controls the total number of data blocks (of variable size) that can be cached by the driver. Each entry corresponding to this field consumes a total of 96 bytes. The default value for this field is 128 (total memory consumption 128*96 bytes).
size_datacache
This parameter controls the total amount of memory, in kilobytes, that is allocated for data caching. The default value for this field is 512 (memory usage 512 KB).
hash_tblsize
This parameter controls the size of the hash table associated with each entry of the file attribute cache. Thus, each file attribute entry has hash_tblsize number of pointers to the associated directory name lookup entries (in case of directory files) or data cache entries (for regular files.)

For directory files, the hashing is based on the file names associated with that directory. For regular files, the hashing is based on the block number of the data block. The default value of this field is 32 (32*4 bytes.)
blksize
This parameter determines the default logical blocksize for data caching. This field is overridden by the read/write block size given at NFS mount time, for that mount. The default value for this field is 8192.
dis_cache
This field, if non-zero, disables system-wide NFS caching. The default value for this field is zero and should remain zero.
dis_datacache
This field, if set to non-zero, disables NFS data caching in the system. This field should remain zero.
acregmin
This field specifies the minimum time, in seconds, that attribute cache entries are held by the client after the last modification before they are queued for updating with the NFS server. This field applies to regular files. The default value for this field is 10.
acregmax
This field specifies the maximum time, in seconds, that cached file attributes are assumed to remain valid on the NFS client. Beyond this period, the attribute cache entry is discarded and refreshed from the server. The default value for this field is 60 seconds.
acdirmin
This field specifies the minimum time, in seconds, that directory file attributes are considered valid. Beyond this duration, the attributes become eligible for refresh. The default value for this field is 30 seconds.
acdirmax
This field specifies the maximum time, in seconds, that directory file attributes are expected to remain the same. After this time period, the attribute cache entries for files in this directory are refreshed from the NFS server as needed. The default value for this field is 60 seconds.
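As an illustration, the fields described above correspond to an initializer in /sys/devices/nfs_cache_info.c. The sketch below shows what an edited initialization might look like, doubling the data cache relative to the defaults; the variable name and initializer layout are assumptions, and only the values follow the defaults documented above:

struct nfs_cache_info nfs_cache_info = {
    512,    /* num_attrcache:  attribute cache entries (default 512) */
    128,    /* num_dnlchdrs:   dir name lookup cache headers (default 128) */
    512,    /* size_dnlcblk:   bytes per dnlc cache block (default 512) */
    256,    /* num_datacache:  data cache headers (default 128, doubled here) */
    1024,   /* size_datacache: KB of data cache memory (default 512, doubled here) */
    32,     /* hash_tblsize:   hash table size (default 32) */
    /* default mount-time options */
    8192,   /* blksize:        caching block size */
    0,      /* dis_cache:      leave caching enabled */
    0,      /* dis_datacache:  leave data caching enabled */
    10,     /* acregmin */
    60,     /* acregmax */
    30,     /* acdirmin */
    60      /* acdirmax */
};

As noted earlier, change only the values; the order of the initializers must match the structure definition.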

Once the desired values for the tuning parameters are updated in /sys/devices/nfs_cache_info.c, the device library must be updated and the kernel rebuilt. These steps are described in "Customizing LynxOS" in the LynxOS User's Guide.

Tuning NFS File Locking

The lock device driver facilitates advisory file locking over NFS. Advisory file locking works only with cooperating processes. File locking is supported through the fcntl() and lockf() system calls. This device driver also supports monitored-class locking, in which all locks are monitored on both the client and server sides. The file /sys/devices/lock_info.c controls the number of NFS file locking client requests that can be enqueued in the driver.

struct lock_info {
    int nports;    /* max requests that can be enqueued */
};

Each entry that corresponds to the request queue takes 3068 bytes. The default value for this field is set as 10 (total memory allocated 10*3068 bytes).

Once the desired parameter value is updated in the file /sys/devices/lock_info.c, the device library must be updated and the kernel rebuilt. These steps are described in "Customizing LynxOS" in the LynxOS User's Guide.
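As an illustration of the application-level locking interface mentioned above, the following minimal sketch acquires and releases an advisory write lock on a record of an NFS-mounted file using fcntl(); the file path is hypothetical:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct flock fl;
    int fd = open("/usr/mydata/shared.dat", O_RDWR);   /* hypothetical NFS-mounted file */

    if (fd < 0) {
        perror("open");
        return 1;
    }

    fl.l_type   = F_WRLCK;     /* advisory exclusive (write) lock */
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;           /* lock the first 512 bytes */
    fl.l_len    = 512;

    if (fcntl(fd, F_SETLKW, &fl) < 0) {   /* block until the lock is granted */
        perror("fcntl(F_SETLKW)");
        return 1;
    }

    /* ... read and update the locked record ... */

    fl.l_type = F_UNLCK;       /* release the lock */
    fcntl(fd, F_SETLK, &fl);

    close(fd);
    return 0;
}

Because the locking is advisory, it is effective only if every process that accesses the file uses the same locking protocol.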

Configuring the NFS Server

The only file that must be modified to allow other systems to access data on the system is /etc/exports. This file is a database used by the NFS server to determine if the requesting host is authorized to share the system's data. The syntax for each entry is as follows:

<directory> [option] [,option]...

The first field is the directory that is to be exported. If no options are given, any host may mount this directory and access the files for reading and writing.

Access to the directory may be given only to specific hosts. In the following example, the directory /mydata is exported for access by hosts shark and orca.

/mydata access=shark:orca

Permission to access an NFS-mounted directory as root must be explicitly declared. An attempt by a remote system to write to an NFS-mounted directory as root fails, even if the directory is mounted read/write. To allow the remote system orca to have root access to the directory /mydata, the following line should be added to /etc/exports:

/mydata root=orca

All directories are exported as read/write unless otherwise specified. The ro option exports a directory as read-only to everyone. The rw option, given a list of hosts, exports a directory as read/write to those hosts; all other hosts have read-only access. In the following example, /mydata is exported read-only.

/mydata ro

To restrict the read/write access of /mydata to only the hosts shark and orca, but allow read-only access to everyone else, the following entry would be added to /etc/exports:

/mydata rw=shark:orca

The options shown in the previous examples may be combined to grant several kinds of access:

/mydata access=shark:orca:fish root=shark,rw=shark:orca

In the above example, access of the exported directory is limited to hosts shark, orca, and fish. Only users with root access on shark have root access to the exported directory, and only users on shark and orca have read/write capabilities. Users on fish can access the files only in a read-only state.
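Taken together, a complete /etc/exports combining the forms shown above might look like the following; /public and /docs are illustrative additional directories:

/mydata access=shark:orca:fish root=shark,rw=shark:orca
/public ro
/docs rw=shark:orca

Each line is independent: /mydata carries the combined restrictions described above, /public is read-only to everyone, and /docs is writable only by shark and orca.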


