
Writing Device Drivers for LynxOS


Porting UNIX Drivers to LynxOS

This appendix discusses the similarities and differences between device drivers under LynxOS and UNIX.

The material that follows describes each feature of a UNIX device driver and points out the corresponding feature in a LynxOS device driver. This appendix supplements the more detailed coverage of LynxOS device drivers found in previous chapters of this manual; consequently, LynxOS features that have no counterpart in a UNIX driver are mentioned only briefly. The versions of the UNIX kernel referred to in this appendix are, for the most part, SVR3.2 and SVR4.

Kernel and Driver Design Goals

A frequently asked question is whether it would be possible to achieve source-level or even binary compatibility between UNIX and LynxOS drivers. While this might, with some effort, be technically feasible, the result would probably not be acceptable to designers of real-time systems.

This is because of the fundamental differences in the design goals of LynxOS as compared to the UNIX kernel. The latter was designed for multi-user time-sharing systems, whereas LynxOS was designed specifically for hard real-time systems. These differences in design goals influence the choice of kernel data structures and algorithms, including those used in device drivers.

The differences are also seen in the services provided by the kernel to device drivers. The LynxOS kernel provides many services that meet specific requirements of real-time systems; these features would not be found in a UNIX driver. Conversely, a UNIX driver may use services that would have a detrimental effect on real-time performance.

Another significant difference is preemption. The UNIX kernel was originally written to be non-preemptive (kernel code runs until it blocks or returns), though some UNIX kernels now exist that are preemptive to some extent. The LynxOS kernel, including device drivers, is fully preemptive. This has a major influence on the way a driver is written.

Different design goals can also be noted at the level of the drivers themselves. UNIX drivers are generally designed to make the most efficient use of I/O devices, thereby maximizing throughput. This goal leads to the use of specific driver techniques such as the chaining of I/O requests, processing of interrupts within an interrupt handler, and the starting of the next I/O operation from within the interrupt handler. In contrast, a LynxOS driver must be designed to have a minimal impact on real-time performance, respecting the relative priorities of the tasks that are using the devices. The way in which interrupts are handled is probably the largest difference between a UNIX and a LynxOS driver.

Given these differences at both the kernel and driver level, it is clear that, in order to respect real-time demands, porting a driver is preferable to providing source or binary compatibility.

Porting Strategy

Porting a UNIX driver can be broadly divided into three stages, covering the following topics:

- Stage One: the driver interface with the kernel, the driver interface with the application, and the U structure
- Stage Two: reentrancy and synchronization, plus system threads and priority tracking
- Stage Three: dynamic installation and the POSIX programming model

The first stage brings the developer to the point where a working LynxOS driver can be tested for functionality. While it enables the reuse of a driver in a relatively short time, this initial port does not take advantage of the real-time aspects of LynxOS, and the driver could have a detrimental effect on system response time. For the driver to conform to the real-time characteristics of LynxOS, the implementation of Stage Two is essential. The features in Stage Three are optional but may be advantageous in certain situations.

Driver Structure

Overall Structure

A LynxOS and a UNIX driver are quite similar in overall structure. Each consists of a number of entry points, including an initialization routine and an interrupt handler. A LynxOS driver has, in addition, one or more kernel threads.

Table B-1:  UNIX vs. LynxOS Structure

LynxOS Driver           UNIX Driver
Initialization          Initialization
Entry points            Entry points
Interrupt handler       Interrupt handler
Kernel threads          (none)

Global Variables

A UNIX driver typically makes widespread use of global variables, which are the most common way for the driver entry points to share information. A LynxOS driver can, and should, be written without any global variables: the kernel passes the address of a per-device statics structure, allocated by the install entry point, to every other entry point. Use of this mechanism is essential to allow dynamic installation and uninstallation of a driver, as the sketch below shows.
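
The following minimal sketch (all names are illustrative, not taken from a real driver) shows the pattern: install() allocates the statics structure, and the kernel then passes its address to every other entry point in place of global variables.

struct statics {
    int open_count;     /* state a UNIX driver would keep in globals */
    int devsem;         /* event synchronization semaphore */
};

xx_install (info)
struct xxinfo *info;
{
    struct statics *s;

    s = (struct statics *) sysbrk (sizeof (struct statics));
    if (s == (struct statics *) SYSERR)
        return (SYSERR);        /* no memory; installation fails */
    s->open_count = 0;
    s->devsem = 0;
    return (s);     /* the kernel passes s to every other entry point */
}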

Major and Minor Device Numbers

There is an important difference in the way UNIX and LynxOS use major device numbers. Under UNIX, the major device number is used to distinguish between different drivers. The minor number distinguishes between different devices controlled by the same driver. Under LynxOS, each driver has a unique driver ID, though this number is never used by the driver code. Different devices controlled by the same driver are identified by different major numbers (as opposed to the minor number in UNIX). The use of the minor device number is defined entirely by the driver. LynxOS driver IDs and major numbers are allocated automatically during a kernel build.

Driver Interface with Kernel

The interface between the UNIX kernel and a driver is defined by the driver service calls, the init entry point, and the interrupt handler.

Driver Service Calls

The services provided by a kernel to device drivers can be grouped into several functional classes, which are described in the following sections.

Memory Management

This section describes the functions used for allocating memory and for translating memory addresses.

Memory Allocation

Functions used for the allocation of memory for the driver's internal use are as follows:

Table B-2:  Internal Use Memory Allocation Functions

LynxOS                                          UNIX
sysbrk, sysfree, get1page, free1page,           kmem_alloc, kmem_free
alloc_cmem, free_cmem

The functions sysbrk and sysfree are the nearest equivalents to UNIX kmem_alloc and kmem_free. The UNIX function kmem_alloc can sleep while waiting for free space. The LynxOS functions never sleep; instead, they return SYSERR if the memory request cannot be satisfied immediately, as the sketch below illustrates.
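
For example (a minimal sketch; the choice of ENOMEM as the error code is an assumption):

buf = (char *) sysbrk (nbytes);
if (buf == (char *) SYSERR) {
    /* no memory available right now; fail rather than sleep */
    pseterr (ENOMEM);
    return (SYSERR);
}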

Address Translation

The functions required for converting virtual to physical addresses are as follows:

Table B-3:  Virtual to Physical Address Conversion Functions

                                LynxOS                                  UNIX
User virtual to physical        mmchain, mmchainjob                     vtop
Kernel virtual to physical      mmchainjob (job 0); addr - PHYSBASE     kvtophys

Note that mmchain returns a kernel virtual address. To convert this to a physical address, the constant PHYSBASE must be subtracted.

Synchronization

In non-preemptive UNIX kernels, synchronization is a fairly straightforward matter, but in a fully preemptive kernel such as LynxOS it is considerably more complex and can represent a significant portion of the porting effort. For more information, see "Reentrancy and Synchronization" later in this appendix.

DMA Transfers and Raw I/O

Setting up a DMA transfer requires kernel services for locking the user's pages in memory and for translating user virtual addresses to physical addresses.

The following code fragments illustrate typical SVR4 and LynxOS driver code for performing a DMA transfer to user space.

UNIX

read (dev, uio)
dev_t dev;
struct uio *uio;
{
    physiock (mybreak, 0, dev, B_READ, nblocks, uio);
}

mybreak (bp)
struct buf *bp;
{
    dma_pageio (mystrategy, bp);
}

mystrategy (bp)
struct buf *bp;
{
    paddr_t physaddr;

    physaddr = vtop (bp->b_addr, bp->b_proc);
    /* start DMA transfer */
}

The key functions in the previous code fragment are:

physiock
Faults in and locks memory pages.
dma_pageio
Breaks the transfer into 512-byte blocks and calls the strategy routine.
mystrategy
Converts the user virtual address to a physical address, then sets up and initiates the DMA transfer (user written).

LynxOS

read (s, f, buff, count)
struct statics *s;
struct file *f;
char *buff;
int count;
{
    struct dmachain *array;
    int np, nc, pid, i;

    pid = getpid ();
    np = npages (buff, count);
    array = (struct dmachain *) sysbrk (np * sizeof (struct dmachain));

    mem_lock (pid, buff, count);
    nc = mmchain (array, buff, count);
    for (i = 0; i < nc; i++) {
        /*
         * Do DMA transfer at physical address
         * array[i].address, length array[i].count
         */
        array[i].address -= PHYSBASE;   /* convert to physical address */
        do_dma (&array[i]);             /* user supplied routine */
    }
    sysfree (array, np * sizeof (struct dmachain));
    mem_unlock (pid, buff, count, TRUE);
    return (count);     /* bytes transferred, returned to the application */
}

The key functions in the previous code fragment are:

mem_lock
Faults in and locks pages.
mmchain
Converts a virtual address range to a list of kernel virtual addresses, which are converted to physical addresses by subtracting PHYSBASE.
mem_unlock
Unlocks memory pages.

Note that whereas UNIX uses the block interface (strategy entry point) for raw I/O, LynxOS uses the character interface read and write entry points.

Block Input/Output

strategy Entry Point

Both UNIX and LynxOS block drivers have a strategy entry point that is called by the kernel's block buffering I/O subsystem to perform transfers to block devices.

Table B-4:  Strategy Entry Point Comparison

LynxOS                          UNIX
strategy (s, bp)                strategy (bp)
struct statics *s;              struct buf *bp;
struct buf_entry *bp;

As with other entry points, the LynxOS strategy routine is passed the address of the device's statics structure as the first argument.

buf Structure

This data structure defines the buffers that are used to hold the data blocks from a block device. In LynxOS, this structure is of type struct buf_entry. The correspondence between the fields is shown below.

Table B-5:  buf Data Structure Comparison

LynxOS struct buf_entry         UNIX struct buf
int b_status                    int b_flags
struct buf_entry *av_forw       struct buf *av_forw
struct buf_entry *av_back       struct buf *av_back
int b_device                    o_dev_t b_dev
                                unsigned b_count
char *memblk                    caddr_t b_addr
long b_number                   daddr_t b_blkno
                                char b_oerror
                                unsigned int b_resid
                                clock_t b_start
                                struct proc *b_proc
                                struct page *b_pages
                                long b_bufsize
                                int (*b_iodone)()
                                struct vnode *b_vp
                                int b_error
                                dev_t b_edev

The symbolic constants used to specify bits in the b_flags field are shown below.

Table B-6:  b_flags Field Comparison

LynxOS          UNIX
B_BUSY          B_BUSY
B_DONE          B_DONE
B_ERROR         B_ERROR
                B_PAGEIO
B_PHYS          B_PHYS
B_READ          B_READ
B_WANTED        B_WANTED
B_ASYNC         B_ASYNC

Block I/O Support Routines

UNIX provides a number of support routines for block device drivers.

biowait
Suspends the caller, waiting for I/O completion.
biodone
Wakes up the waiting process and releases the buffer.
brelse
Puts the buffer back on the free list.

The following code fragment shows how these routines are typically used in the strategy entry point and interrupt handler of a UNIX driver.

UNIX

xx_strategy (bp)
struct buf *bp;
{
    /* start transfer on device ... */
    ....
    /* if transfer is asynchronous, return; else wait for completion */
    if (bp->b_flags & B_ASYNC)
        return (0);
    biowait (bp);
}

xx_intr ()
{
    if (error_condition)
        bp->b_flags |= B_ERROR;     /* B_ERROR is a b_flags bit */
    biodone (bp);                   /* wake up process */
}

LynxOS does not provide the biowait or biodone routines, but the code to implement the required functionality is straightforward, as shown below.

LynxOS

strategy (s, bp)
struct statics *s;
struct buf_entry *bp;
{
    /* start transfer on device ... */

    swait (&s->devsem, SEM_SIGIGNORE);  /* wait for device completion */

    /* set bits to indicate transfer status */
    bp->b_status |= B_DONE;
    if (s->error)
        bp->b_status |= B_ERROR;

    if (bp->b_status & B_ASYNC) {
        /* if async transfer, release buffer ... */
        ssignal (&bp->b_rwsem);
        brelse (bp);
    } else
        ssignal (&bp->b_rwsem);     /* ... else wake up waiting task */
}

Driver Debugging

UNIX

printf
Print message on system console (uses polling)
uprintf
Print message on user terminal (uses driver)

LynxOS

kprintf
Print message on debug console (uses polling)
cprintf
Print message on system console (uses driver)
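
For example, a driver might use kprintf() in an interrupt handler, where the polled debug console is safe, and cprintf() elsewhere. This usage sketch assumes both take printf-style arguments; status and unit are illustrative variables:

kprintf ("xx: intr, status = %x\n", status);    /* polled; safe at interrupt time */
cprintf ("xx: device %d installed\n", unit);    /* goes through the console driver */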

Initialization Routine

Although both UNIX and LynxOS drivers have an initialization routine, the way in which they are used differs in some important ways. By convention, the UNIX routine is called xxxinit; the LynxOS routine is called xxxinstall.

The UNIX init routine is called once, at boot time, and receives no arguments. The LynxOS install routine is called once for each device that is installed; it receives a pointer to a device information structure and returns a pointer to the statics structure for that device (or SYSERR on failure).

Probing for Devices

One of the tasks usually performed by the initialization routine is to test for the presence of a device. A UNIX driver must trap the bus errors that can occur when probing for a device that is not present; in the LynxOS install routine this is handled automatically. Typical UNIX and LynxOS code is illustrated below:

UNIX init Routine

#define MAX_CONT 4              /* no. of supported controllers */
struct csb csb[MAX_CONT];       /* controller status blocks */
extern int xx_ccnt;             /* no. of configured controllers */
extern caddr_t xx_addr[];       /* device addresses */

xx_init ()
{
    int i;

    for (i = 0; i < xx_ccnt; i++) {
        if (setjmp (u.u_tsav) == 0) {
            u.u_nofault = TRUE;
            ......      /* touch the device here */
            u.u_nofault = FALSE;
            ......      /* Initialize hardware and software */
        } else
            xx_addr[i] = 0;     /* device not present */
    }
}

LynxOS install Routine

xx_install (info)
struct xxinfo *info;    /* user defined configuration parameters */
{
    struct statics *s;

    ....        /* Touch device here */
    /* If we get here, we know the device is present */
    s = (struct statics *) sysbrk (sizeof (struct statics));
    ....        /* Initialize software and hardware */
    return (s);
}

Interrupt Handling

In System V, the details of a device's interrupt capabilities are defined statically in configuration files external to the driver. The name of the interrupt handler is xxx_intr, where xxx_ is the specified driver prefix.

Because LynxOS supports dynamic driver installation and deinstallation, attaching and detaching an interrupt handler is done within the driver code using the functions iointset() and iointclr(). This is done in the install() and uninstall() entry points. The device's interrupt vector is normally passed to the install routine in the device information structure.

For x86

iointset (32 + info->vector, intr_handler, s);
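
The matching call in the uninstall() entry point detaches the handler before the statics structure is freed. The following sketch assumes the vector number was saved in the statics structure at install time and that iointclr() takes the vector as its argument:

xx_uninstall (s)
struct statics *s;
{
    iointclr (32 + s->vector);      /* detach interrupt handler */
    sysfree ((char *) s, sizeof (struct statics));
    return (OK);
}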

U Structure

Unlike most UNIX kernels, LynxOS does not have a U structure. The following paragraphs discuss the most commonly used members of this structure and how the equivalent functionality is implemented in a LynxOS driver.

u_base, u_count, u_offset

Older versions of UNIX used these fields to specify the details of a data transfer in the read/write entry points. The driver modifies these fields during the course of the transfer; the return value received by the application is the initial u_count value minus its final value. More recent implementations of UNIX have replaced these fields with a uio structure.

In a LynxOS driver, the user buffer address and size are passed directly as arguments to the driver entry point. An important difference from UNIX is that the value returned to the application is the value returned by the driver entry point. The seek position on the device is specified by the position field in the file structure; the driver is responsible for updating it at the end of a transfer.

u_fmode

This field holds the file mode flags. Its main use in the read/write entry points is to test for non-blocking I/O. It is also used to test, for example, whether an application is trying to read from a device opened in write-only mode.

In LynxOS, the file mode is held in the access_mode field of the file structure, as the sketch below illustrates.
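
For example, a read entry point might test for non-blocking I/O as follows. This is a sketch: the flag name O_NONBLOCK, the error code EAGAIN, and the helper data_available() are assumptions for illustration:

if (!data_available (s)) {
    if (f->access_mode & O_NONBLOCK) {
        pseterr (EAGAIN);       /* would block; fail immediately */
        return (SYSERR);
    }
    swait (&s->data_ready, SEM_SIGABORT);   /* block until data arrives */
}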

u_error

This field contains an error code, which is copied to the application's errno variable.

A LynxOS driver specifies an error code with the pseterr() function.

u_segflg

This field indicates whether a data transfer is to or from user or kernel space. It is necessary to know this because the user process and kernel have separate virtual address spaces.

In LynxOS, the user process and kernel exist within the same virtual space, so this functionality is not required.

u_procp

This field is a pointer to the process table entry for the current process. UNIX device drivers seldom need to access this field explicitly. In LynxOS, each process is identified by a unique job number, which can be accessed in the driver's top-half routines to provide similar functionality. The function getpid() can also be used to obtain the current process ID.

u_tsav, u_nofault

These are used for trapping bus errors, typically in the init() routine.

In the install routine of a LynxOS driver, bus errors are handled automatically. Elsewhere in a driver, the routines recoset() and noreco() must be used to catch bus errors, as the sketch below illustrates.
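
The following sketch assumes recoset() behaves like setjmp(), returning nonzero when a bus error has occurred, and that noreco() removes the recovery context; check the reference pages for the exact semantics. The csr pointer is an illustrative device register:

if (recoset ()) {
    /* a bus error occurred while touching the device */
    noreco ();
    pseterr (ENXIO);
    return (SYSERR);
}
status = *csr;      /* touch the device; may fault */
noreco ();          /* normal path: remove recovery context */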

Reentrancy and Synchronization

Critical Code Regions

Accesses to shared data structures and hardware registers must be serialized. The synchronization mechanisms used in a UNIX driver depend very much on whether the kernel is preemptive. SVR4 driver code is not preempted, though a synchronous context switch occurs if a driver calls sleep(). Drivers written for such kernels need only synchronize with the interrupt-level routines, which is done with the spln and splx functions. The LynxOS equivalents are disable() and restore(), with an important difference: the LynxOS functions disable and restore all interrupt levels, so selectively masking some levels while allowing others (interrupt nesting) is not possible.

Drivers under LynxOS are fully preemptive. Appropriate synchronization must be added to make the driver reentrant.

Event Synchronization

This type of synchronization involves waiting for an event (buffer free, transfer complete, data ready, and so on) to occur.

LynxOS swait/ssignal
UNIX sleep/wakeup

The UNIX sleep() function specifies a priority, which is assigned to the process when it wakes up. LynxOS uses fixed scheduling priorities; a task's priority can be changed only at the request of the user application. Both UNIX sleep() and LynxOS swait() use an argument to specify how signals are handled while the task is blocked. It is difficult to find an exact correspondence in behavior in all cases.

UNIX sleep Priority

<= PZERO
Signals are ignored. Use the symbolic constant SEM_SIGIGNORE with swait.
> PZERO
Signals are delivered, but sleep never returns. The nearest equivalent with swait is the symbolic constant SEM_SIGRETRY; however, swait is automatically restarted and eventually returns.
> PZERO | PCATCH
sleep is aborted and returns 1 on receipt of a signal. The LynxOS equivalent is the symbolic constant SEM_SIGABORT; however, swait returns a non-zero value (not necessarily 1).

Another important difference is that wakeup() is stateless: it can only wake tasks that are already blocked on the event at the time wakeup() is called. In contrast, ssignal() has a counter associated with it, so a signal delivered before the corresponding wait is not lost. This difference can influence driver design: more care is needed with synchronization in the stateless case, though in a UNIX driver the problem is normally masked by the fact that the driver is not preemptive. The sketch below illustrates the counted behavior.
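
Because ssignal() increments a counter, an interrupt routine can signal before any task is waiting without the event being lost, as this sketch illustrates (the semaphore name is illustrative):

/* interrupt side: increments the count even if no task is blocked */
ssignal (&s->data_ready);

/* task side: returns immediately if a signal is already pending */
swait (&s->data_ready, SEM_SIGIGNORE);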

Driver Interface with User Applications

The driver interface with the application covers the entry points themselves, the use of the major and minor device numbers, access to user space, and the way errors are returned to the application. Each topic is discussed in the sections below.

Entry Points

There are a number of general remarks that apply to all entry points. In particular, most LynxOS entry points receive a pointer to the file structure, from which information such as the access mode and the device number can be obtained:

int flag = f->access_mode;
int dev = f->dev;

Major and Minor Device Numbers

As discussed above, there is an important difference in the way UNIX and LynxOS use the device numbers. Typically, a UNIX driver uses (part of) the minor number to index into an array containing state variables for each device, as illustrated below.

UNIX

/* number of supported controllers */
#define MAX_CONT 4

/* controller status blocks */
struct csb csb[MAX_CONT];

/* number of configured controllers */
extern int xxx_ccnt;

xxx_open (dev, mode, otyp, cred)
dev_t *dev;
int mode;
int otyp;
cred_t *cred;
{
    struct csb *csbp;
    int cntlr;

    cntlr = getminor (*dev) & 0xf;
    if (cntlr >= xxx_ccnt || cntlr >= MAX_CONT)
        return (ENXIO);
    csbp = &csb[cntlr];
    ...
}
This code is unnecessary in a LynxOS driver because the address of the controller's status block (the statics structure) is passed as the first argument to the entry points.

LynxOS

xxx_open (s, dev, f)
struct statics *s;
int dev;
struct file *f;

Note: UNIX drivers usually use the term controller status block; LynxOS drivers use a statics structure. They are more or less the same thing.

open/close

Table B-7:  open/close Comparison

UNIX:

open (dev, mode, otyp, cred)
dev_t *dev;     /* SVR4 */
int mode;
int otyp;
cred_t *cred;

close (dev, mode, otyp, cred)
dev_t dev;
int mode;
int otyp;
cred_t *cred;

LynxOS:

open (s, dev, f)
struct statics *s;
int dev;
struct file *f;

close (s, f)
struct statics *s;
struct file *f;

As shown in the listing above, the LynxOS entry points receive the address of the statics structure and a pointer to the file structure in place of the UNIX mode, open type, and credential arguments.

read/write

Table B-8:  read/write Comparison

LynxOS:

read (s, f, buff, count)
struct statics *s;
struct file *f;
char *buff;
int count;

UNIX:

read (dev, uiop, credp)
dev_t dev;
uio_t *uiop;
cred_t *credp;

The UNIX uio structure specifies a list of user buffers. Earlier UNIX kernels used the clist data structure for character storage.

In a LynxOS driver, the user buffer is specified by buff and count. The entry point is called once for each buffer in scatter/gather I/O (readv/writev). LynxOS does not use the clist data structure.

The following code fragments compare typical write entry point logic used to transmit all user data. Note that in LynxOS, the driver is responsible for updating the seek pointer (f->position). Another important difference is that the UNIX driver reports the number of bytes not transferred (through the uio structure's residual count), whereas the LynxOS driver returns the number of bytes transferred.

Table B-9:  write Comparison

LynxOS:

for (i = 0; i < count; i++)
    transmit (buff[i]);
f->position += count;
return (count);

UNIX:

while ((c = uwritec (uio)) >= 0)
    transmit (c);
return (0);

ioctl

Table B-10:  ioctl Comparison

LynxOS:

ioctl (s, f, cmd, arg)
struct statics *s;
struct file *f;
int cmd;
int arg;

UNIX:

ioctl (dev, cmd, arg, mode, credp, rval)
dev_t dev;
int cmd;
int arg;
int mode;
cred_t *credp;
int *rval;

If arg is a pointer, the LynxOS driver must check the validity of the address with rbounds() and wbounds(), as in the sketch below.
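
For example, an ioctl command that writes a status structure back to the application might validate arg as follows. This fragment belongs inside the ioctl switch; the command name XX_GET_STATUS and struct xx_status are illustrative:

case XX_GET_STATUS:
    if (wbounds (arg) < sizeof (struct xx_status)) {
        pseterr (EFAULT);       /* bad user address */
        return (SYSERR);
    }
    *(struct xx_status *) arg = s->status;
    break;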

select

Table B-11:  select Comparison

LynxOS:

struct statics {
    ...
    int space_free;     /* for output */
    int data_ready;     /* on input */
    int *rsel_sem, *wsel_sem;
};

select (s, f, which, ffs)
struct statics *s;
struct file *f;
int which;
struct sel *ffs;
{
    switch (which) {
    case SREAD:
        ffs->iosem = &s->data_ready;
        ffs->sel_sem = &s->rsel_sem;
        break;
    case SWRITE:
        ffs->iosem = &s->space_free;
        ffs->sel_sem = &s->wsel_sem;
        break;
    case SEXCEPT:
        return (SYSERR);
    }
    return (OK);
}

/* when input data arrives */
s->data_ready = 1;
disable (ps);
if (s->rsel_sem)
    ssignal (s->rsel_sem);
restore (ps);

/* when output space becomes free */
s->space_free = 1;
disable (ps);
if (s->wsel_sem)
    ssignal (s->wsel_sem);
restore (ps);

UNIX:

extern int selwait;
struct proc *selr, *selw;

select (dev, rw)
dev_t dev;
int rw;
{
    switch (rw) {
    case FREAD:
        selr = u.u_procp;
        break;
    case FWRITE:
        selw = u.u_procp;
        break;
    }
    return (0);
}

/* Data input */
if (selr) {
    selwakeup (selr, coll);
    selr = 0;
}

/* Data output */
if (selw) {
    selwakeup (selw, coll);
    selw = 0;
}

Accessing User Space

UNIX

The currently executing user process and the kernel may have separate virtual address spaces. In this case, kernel service routines are used to transfer data to and from user space. These routines usually handle invalid user addresses.

LynxOS

The current user process and the kernel exist in the same virtual address space. The kernel (including drivers) can access the whole of this space, so drivers can transfer data to and from user space directly, using pointers.

The following code fragments illustrate how data might be transferred from user space in an ioctl entry point.

Table B-12:  ioctl Data Transfer Example

LynxOS:

if (rbounds (useraddr) < nbytes) {
    pseterr (EFAULT);
    return (SYSERR);
}
while (nbytes--)
    *kernaddr++ = *useraddr++;

UNIX:

char *useraddr, *kernaddr;
int nbytes;

if (copyin (useraddr, kernaddr, nbytes) == -1)
    return (EFAULT);

Returning Errors to User Application

UNIX

Earlier versions of UNIX used the u_error field in the u structure; SVR4 uses the entry point return value.

LynxOS

A LynxOS driver calls the pseterr() function and returns the value SYSERR.

The following code fragments illustrate how a driver returns the error EIO:

Table B-13:  Returning Errors

LynxOS:

pseterr (EIO);
return (SYSERR);

UNIX SVR3:

u.u_error = EIO;
return;

UNIX SVR4:

return (EIO);

LynxOS Kernel Threads

When kernel threads are used, interrupt processing is performed by a preemptive, prioritized task, which is essential in order to maintain deterministic system response times. Using the UNIX interrupt architecture, in which all interrupt processing is done in the interrupt handler itself, degrades the system's real-time performance. The sketch below shows the typical division of labor.
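
In the following sketch, the interrupt handler does the minimum necessary and signals a semaphore, while a kernel thread performs the actual processing at its own priority. The thread is created at install time (for example with ststart(); the creation details are omitted here as they vary by release), and the names are illustrative:

xx_intr (s)
struct statics *s;
{
    /* acknowledge the device, then defer the real work */
    ssignal (&s->intrsem);
}

xx_thread (s)
struct statics *s;
{
    for (;;) {
        swait (&s->intrsem, SEM_SIGIGNORE);
        /* move data, start the next I/O, wake up clients ... */
        ssignal (&s->devsem);
    }
}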

Dynamic Installation

LynxOS supports the dynamic installation and deinstallation of drivers. This greatly facilitates the driver development and debugging phases, as a kernel rebuild and reboot are not necessary each time the driver is modified. If the port has been done correctly, the only addition required to support dynamic installation is the declaration of the entry_points structure, sketched below.
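
A typical declaration looks like the following sketch. The struct tag (dldd) and the member order follow common LynxOS examples, but the exact layout should be checked against the header files of the release being used:

static struct dldd entry_points = {
    xx_open, xx_close, xx_read, xx_write,
    xx_select, xx_ioctl, xx_install, xx_uninstall,
    (char *) 0
};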

POSIX Programming Model

The LynxOS implementation of the POSIX.1 and POSIX.1b features permits a much simpler driver design for supporting asynchronous I/O, non-blocking I/O, and synchronous I/O multiplexing and polling.

Asynchronous I/O

The complexity of handling asynchronous transfers is hidden from the application and driver developer. The POSIX API provides these services to the application developer, and the driver sees only synchronous requests. Therefore, code to handle asynchronous transfers can be removed from a UNIX driver if the LynxOS version is intended for use only with POSIX-conforming applications.

Synchronous I/O Multiplexing and Polling

This functionality is provided by the select system call at the application level and the select entry point in a driver. The POSIX standard does not define a select function, so if the LynxOS driver is intended for use only with POSIX-conforming applications, the select entry point can be removed.


