Writing Device Drivers for LynxOS
Device Resource Manager (DRM)
The Device Resource Manager (DRM) is a LynxOS module that functions as an intermediary between the operating system, device drivers, and physical devices and buses. The DRM provides a standard set of service routines that device drivers can use to access devices or buses without having to know device- or bus-specific configuration details. DRM services include device identification, interrupt resource management, device I/O, and device address space management. The DRM also supports dynamic insertion and deletion of devices.
This chapter introduces DRM concepts and explains DRM components. Sample code is provided for DRM interfaces and services. The PCI bus layer is described in detail with a sample driver and application. This chapter provides information on the following topics:
- DRM Concepts
- DRM Service Routines
- Using DRM Facilities from Device Drivers
- Using DRM Facilities from Applications
- Advanced Topics
- PCI Bus Layer
- Example Driver
- Sample Application
DRM Concepts
Device Tree
The Device Tree is a hierarchical representation of the physical device layout of the hardware. DRM builds a device tree during kernel initialization. The device tree is made up of nodes representing the I/O controllers, host bridges, bus controllers, and bridges. The root node of this device tree represents the system controller (CPU). There are two types of nodes in the device tree: DRM bus nodes and DRM device nodes.
DRM bus nodes represent physical buses available on the system, while DRM device nodes represent physical devices attached to the bus.
The DRM nodes are linked together to form parent, child, and sibling relationships. A typical device tree is shown in the figure below. To support Hot Swap environments, DRM nodes are inserted into and removed from the device tree, mirroring the Hot Swap insertion and extraction of system devices.
Figure: Device Tree
DRM Components
A module view of DRM and related components is shown in the following figure. A brief description of each module is given below the figure.
Figure: Module View
- DRM - DRM provides device drivers with a generalized device management interface.
- KERNEL - The LynxOS kernel provides services to applications and device drivers. DRM uses many of the kernel service routines.
- BUS LAYER - These modules perform bus-specific operations. DRM uses the service routines of the bus layer to provide service to the device drivers.
- DEVICE DRIVER - These modules provide a generic application programming interface to specific devices.
- BSP - The Board Support Package (BSP) provides a programming interface to the specific hardware architecture hosting LynxOS. This module also provides device configuration information to other modules.
Bus Layer
DRM uses bus layer modules to support devices connected to many different kinds of buses. There are numerous bus architectures, many of which are standardized. Typical bus architectures seen in systems are the ISA, PCI, and VME standards; however, proprietary bus architectures also exist. DRM needs a bus layer module for each kind of bus architecture it supports. Device drivers use DRM service routines to interface to the bus layers, and the bus layers interface with the BSP to obtain board-specific information.
The bus layers provide the following service routines to DRM:
- Find bus nodes and device nodes
- Initialize bus and device nodes
- Allocate resources for bus and device nodes
- Free resources from bus and device nodes
- Map and unmap a device resource
- Perform device I/O
- Insert a bus or device node
- Remove a bus or device node
LynxOS supports only one bus layer, which is used for managing PCI and CompactPCI devices. Some of the DRM functions described later in this chapter require the bus layer ID. The correct symbol to use is PCI_BUSLAYER.
DRM Nodes
A DRM node is a software representation of the physical device. Each node contains fields that provide identification, device state, interrupt routing, bus-specific properties, and links to traverse the device tree. DRM service routines are used to access the DRM node fields. These routines provide device drivers access to DRM facilities via a standard interface. This eliminates the need to know implementation details of the specific software structure. Some of the important fields of the DRM node are shown in the next table.
DRM Node States
The status of a DRM node is indicated by its state. A DRM node is set to the IDLE state when it is created. Devices that are removed from the DRM tree, or that have not been detected, are considered UNKNOWN. The UNKNOWN state is not used internally by DRM; it simply denotes a device that DRM does not recognize. The following diagram details the transitions between DRM node states.
Figure: DRM Node States
DRM Initialization
The DRM module is initialized during LynxOS kernel initialization. DRM builds a device tree of all visible devices and, where possible, brings them to the READY state. This enables statically linked drivers to claim the DRM nodes and bring up basic system services. Some DRM nodes may be left in the SELECTED state after kernel initialization is complete, typically because the resources they require are unavailable.
LynxOS provides the ability to control PCI resource allocation. PCI resources can be allocated either by the BIOS or by DRM. By default, LynxOS x86 distributions use the BIOS to handle resource allocation. For other platforms, DRM handles the resource allocation. Because DRM uses the same set of interfaces, whether or not it handles resource allocation, device drivers do not need to change.
For more information on PCI resource allocation and DRM, see the chapter "PCI Resource Allocator for LynxOS" in the LynxOS User's Guide.
DRM Service Routines
DRM service routines are used by device drivers to identify, set up, and manage device resources. Typically, they are used in the install() and uninstall() entry points of the device driver. A device driver locates the device it needs to service and obtains an identifying handle. This handle is used in subsequent DRM calls to reference the device. The table below gives a brief description of each service routine and its typical usage. See the DRM man pages for more details. Additionally, see "Example Driver".
Interface Specification
Device drivers call DRM service routines like any standard kernel service routine. The following table provides a synopsis of the service routines and their interface specification. Refer to LynxOS man pages for a complete description.
Table: DRM Service Routine Interface Specification (columns: Name, Synopsis)
Using DRM Facilities from Device Drivers
Device Identification
In the install() device driver entry point, a driver attempts to connect to the device it intends to use. To locate its device, the driver uses the drm_get_handle() service routine, which returns a pointer to the DRM node handle via its handle argument. The driver specifies the device it is interested in by calling drm_get_handle() in the following manner:
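The fragment below is a minimal sketch of this call, assuming the prototype documented in the drm_get_handle() man page; ABC_VENDORID and ABC_DEVICEID are the hypothetical identifiers also used in the example driver later in this chapter.

```c
#include <drm.h>                     /* DRM service routines; header name assumed */

struct drm_node_s *abc_node;         /* handle for the claimed device */
int status;

/* In install(): claim the first READY device on the PCI bus layer that
 * matches the (hypothetical) ABC_VENDORID / ABC_DEVICEID identifiers. */
status = drm_get_handle(PCI_BUSLAYER, ABC_VENDORID, ABC_DEVICEID, &abc_node);
if (status != 0) {
    /* no matching READY device was found -- fail the install */
}
```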
It is possible to supply a wild card to drm_get_handle() by using vendor_id = -1 and device_id = -1 as parameters. This claims and returns the first READY device in an unspecified search order. The driver then examines the properties of the device to make a selection and must subsequently release any unused devices.
It is also possible to navigate the device tree using the traversal functions and to obtain handles for the nodes. Device selection can also be performed by other modules, drivers, or system management applications. If device selection has been done by some other means, the driver claims the device by using the drm_claim_handle() service routine, which takes the node handle as a parameter.
The drm_free_handle() service routine is used to release the handle. The release of the device is typically done in the uninstall() routine of the driver. The drm_free_handle() takes the node handle to be freed as a parameter.
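For example, in the uninstall() entry point (s->node is a hypothetical statics field holding the handle obtained in install()):

```c
/* In uninstall(): release the device back to the READY state so it can be
 * claimed again or removed by system management. */
drm_free_handle(s->node);
```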
In Hot Swap environments, system management service routines select devices, make them ready, and provide node handles for drivers to claim and use. These routines facilitate the selection and dynamic loading of the needed drivers and provide them with node handles.
Device Interrupt Management
DRM maintains all interrupt routing data for a device node. Drivers use the drm_register_isr() service routine to register an interrupt service routine and the drm_unregister_isr() service routine to clear a registration. Typically, these service routines are used in the install() and uninstall() entry points of the driver. To support sharing of interrupts in a hot swap/high availability environment, DRM internally dispatches all ISRs sharing an interrupt. The returned link_id is NULL, and the iointlink() kernel service routine does not perform any dispatches.
The following code segments illustrate the use of these DRM service routines:
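The sketch below shows the registration and clearing of an interrupt service routine; abc_isr() and the statics pointer s are hypothetical names, and the exact drm_register_isr() prototype should be confirmed against its man page.

```c
int link_id;

/* In install(): register the driver's interrupt handler for this device.
 * abc_node is the handle obtained with drm_get_handle(); DRM uses the
 * interrupt routing data held in the node. In a shared-interrupt
 * environment the returned link_id is NULL (see above). */
link_id = drm_register_isr(abc_node, abc_isr, (void *)s);

/* In uninstall(): clear the registration before releasing the node. */
drm_unregister_isr(abc_node);
```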
The interrupt management service routines return an error status when applied to a polled-mode device.
Device Address Space Management
Many devices have internal resources that need to be mapped into the processor address space. The bus layers define such device-specific resources. For example, the configuration registers, the bus number, device number, and function number of PCI devices are considered resources. The bus layer defines resource IDs to identify device-specific resources. Some device resources may also need to be allocated; for example, the base address registers of a PCI device must be assigned unique bus address space. DRM provides service routines to map and unmap a device resource into the processor address space. The drm_map_resource() function takes as parameters the device handle, the resource ID, and a pointer to store the returned virtual address. The drm_unmap_resource() function takes as parameters a device handle and a resource ID.
The following code fragment illustrates the use of these service routines:
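A minimal sketch, assuming PCI_RESID_BAR0 is the PCI bus layer's resource ID for base address register 0 (the actual IDs are defined in machine/pci_resource.h) and that the virtual address is returned through an unsigned long; check the drm_map_resource() man page for the exact type.

```c
#include <machine/pci_resource.h>    /* PCI bus layer resource IDs */

unsigned long regs_base;             /* virtual address of the register window */
int status;

/* In install(): map base address register 0 into the processor address space. */
status = drm_map_resource(abc_node, PCI_RESID_BAR0, &regs_base);
if (status != 0) {
    /* mapping failed -- release the handle and fail the install */
}

/* ... access the device registers through regs_base ... */

/* In uninstall(): remove the mapping before freeing the handle. */
drm_unmap_resource(abc_node, PCI_RESID_BAR0);
```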
Device I/O
DRM provides service routines to perform reads and writes to bus layer-defined resources. The drm_device_read() service routine allows the driver to read a device-specific resource, and the drm_device_write() service routine allows the driver to write to a device-specific resource. The resource IDs are usually specified in a bus layer-specific header file; for example, the file machine/pci_resource.h defines the PCI bus layer resources. Both service routines take the handle, resource ID, offset, size, and a buffer as parameters. The meaning of the offset and size parameters is defined by the bus layer. By using these service routines, drivers implement platform-independent methods of accessing device resources. The following code fragment illustrates the use of these service routines.
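As a hedged illustration, the fragment below reads the PCI command register and enables memory-space decoding; PCI_RESID_CMDREG is a hypothetical resource ID, and the offset and size semantics shown here are assumptions to be checked against machine/pci_resource.h.

```c
unsigned int cmd = 0;
int status;

/* Read the 16-bit PCI command register of this device. PCI_RESID_CMDREG is a
 * hypothetical resource ID; the real IDs, and the exact meaning of the offset
 * and size arguments, are defined in machine/pci_resource.h. */
status = drm_device_read(abc_node, PCI_RESID_CMDREG, 0, 2, &cmd);

/* Enable memory-space decoding and write the register back. */
cmd |= 0x0002;
status = drm_device_write(abc_node, PCI_RESID_CMDREG, 0, 2, &cmd);
```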
This code is platform independent. The service routines take care of endian conversion, serialization, and other platform-specific operations.
DRM Tree Traversal
DRM provides a set of functions to navigate the device tree. Most of these functions take a reference node as input and provide a target node as output. The functions are listed below:
- drm_getroot(&handle) returns the root of the device tree in handle.
- drm_getparent(node, &handle) returns the parent of node in handle.
- drm_getchild(node, &handle) returns the child of node in handle.
- drm_getsibling(node, &handle) returns the sibling of node in handle.
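As a sketch, assuming each traversal routine returns 0 on success and a non-zero value when the requested node does not exist, the tree can be walked depth first:

```c
/* Depth-first walk of the device tree, starting at the root node. */
static void walk_drm_tree(struct drm_node_s *node)
{
    struct drm_node_s *child, *sibling;

    if (node == NULL)
        return;

    /* ... examine or print the node here ... */

    if (drm_getchild(node, &child) == 0)
        walk_drm_tree(child);        /* descend into the subtree */

    if (drm_getsibling(node, &sibling) == 0)
        walk_drm_tree(sibling);      /* continue along this level */
}

/* Usage: obtain the root node and start the walk. */
struct drm_node_s *root;

if (drm_getroot(&root) == 0)
    walk_drm_tree(root);
```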
Device Insertion/Removal
DRM provides two service routines that add nodes to the DRM tree: drm_locate() recursively finds and creates DRM nodes given a parent node as reference; drm_insertnode() inserts one node. The drm_insertnode() service routine is used when sufficient data is known about the device being inserted. The drm_locate() service routine is used to build entire subtrees.
A typical application involves inserting the bridge device corresponding to a slot, using the drm_insertnode() service routine. For a given configuration, the geographic data associated with the slots is generally known, and this data is used to insert the bridge device. The data needed to insert a node is bus layer specific; for the PCI bus layer, the PCI device number and function number are provided. The reference parent node determines the bus number of the node being inserted, and the bus layer determines the location of the inserted node in the DRM tree. Once the bridge is inserted, the drm_locate() service routine is used to recursively build the subtree below the bridge.
The drm_locate() and drm_insertnode() service routines initialize the DRM nodes to the IDLE state. The drm_selectnode() or drm_select_subtree() service routines are used to select the desired nodes and set them to the SELECTED state. The drm_alloc_resource() service routine is used to bring the nodes to the READY state. DRM nodes in the READY state are available to be claimed by device drivers. After being claimed, a node is set to the ACTIVE state.
During extraction, device drivers release the DRM node using the drm_free_handle() service routine, which brings the DRM node back to the READY state. Resources associated with the nodes are released using the drm_free_resource() service routine, which sets the nodes to the SELECTED state. The DRM nodes are then put in the IDLE state using the drm_unselect_subtree() or drm_unselect_node() service routines. The IDLE nodes are removed using the drm_delete_subtree() or drm_delete_node() service routines. This last operation puts the device back into the UNKNOWN state, and the device is now extracted from the system. A convenience function, drm_prune_subtree(), removes DRM's knowledge of an entire subtree; it operates on subtrees that are in the READY state.
When DRM nodes are inserted, they are time-stamped to assist in locating recently inserted nodes. Most of the DRM facilities are accessed by user mode programs using the sysctl() interface.
Using DRM Facilities from Applications
User mode applications use the sysctl() interface to access DRM facilities. It is possible to traverse the DRM tree, get node data, and perform insertions and deletions using the sysctl() interface. The sys/drm_sysctl.h header file defines the MIB names and sysctl() data structures that are used. See the sysctl() man page for details on how to use the system call. The sysctl() call is invoked as:
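The synopsis below follows the standard sysctl() form; consult the sysctl() man page for the authoritative declaration.

```c
#include <sys/types.h>
#include <sys/sysctl.h>

int sysctl(int *name, u_int namelen, void *oldp, size_t *oldlenp,
           void *newp, size_t newlen);
```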
sysctl() parameters are described in the table below.
The top-level MIB name to access DRM-related information is CTL_HW. The second-level MIB name to access DRM information is HW_DRM. The third-level names provide DRM-specific facilities as described in the following table.
Hot Swap Management Applications
There are special facilities available for Hot Swap management applications. These facilities are specified as commands to the DRM_CMD third-level facility. The table below lists the commands that are available at this level.
The following code fragments illustrate how this interface is used.
A sample of the CMD_INSERT code is shown below. The prop structure is filled in with bus layer-specific data that provides information on the node being inserted. The ref_node is the parent node of the node to be inserted.
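The fragment below is only a rough sketch under stated assumptions: the request structure name and its fields are hypothetical, and whether the command is passed as a fourth MIB element or inside the request data is defined by sys/drm_sysctl.h.

```c
#include <sys/sysctl.h>
#include <sys/drm_sysctl.h>

int mib[4];
struct drm_cmd_req req;          /* hypothetical request structure */

mib[0] = CTL_HW;                 /* top level: hardware */
mib[1] = HW_DRM;                 /* second level: DRM */
mib[2] = DRM_CMD;                /* third level: Hot Swap command facility */
mib[3] = CMD_INSERT;             /* command: insert a node (placement assumed) */

req.ref_node = ref_node;         /* parent of the node being inserted */
req.prop     = prop;             /* bus layer-specific node properties */

if (sysctl(mib, 4, NULL, NULL, &req, sizeof(req)) < 0)
    perror("DRM CMD_INSERT");
```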
Example Driver
/* This is a sample driver for a hypothetical PCI device. This PCI device has a vendor_id of ABC_VENDORID and a device_id of ABC_DEVICEID. The device has one base address register, implemented as a PCI memory BAR, and needs 4 KB of space. The device registers are implemented in this space. The device needs an interrupt service routine to handle events raised by the device. There may be multiple instances of this device in the system. */
/* Find the device ABC_VENDORID, ABC_DEVICEID. Every call to abc_install() by the OS installs an ABC device. The number of times abc_install() is called depends on how many static ABC devices have been configured via the standard LynxOS device configuration facilities. This entry point is also called during a dynamic device install. */
/* A Hot Swap capable driver may replace the next call with drm_claim_handle() and pass the handle given by the system management layer, instead of finding the device by itself. */
/* drm_get_handle() or drm_claim_handle() failed to find a device. Return failure to the OS, indicating that the install failed. */
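A hedged sketch of an install() entry point that follows the comments above; struct abc_statics, struct abc_info, abc_isr(), and PCI_RESID_BAR0 are hypothetical names, and the exact DRM prototypes should be taken from the man pages.

```c
struct abc_statics {                  /* hypothetical per-device statics */
    struct drm_node_s *node;          /* DRM node handle */
    unsigned long      regs;          /* virtual address of the mapped BAR0 */
};

struct abc_statics *abc_install(struct abc_info *info)
{
    struct abc_statics *s;

    /* Allocate the per-device statics structure. */
    s = (struct abc_statics *)sysbrk((long)sizeof(struct abc_statics));
    if (s == (struct abc_statics *)0)
        return (struct abc_statics *)SYSERR;

    /* Find and claim the next READY ABC device on the PCI bus layer.
     * A Hot Swap capable driver would call drm_claim_handle() here instead. */
    if (drm_get_handle(PCI_BUSLAYER, ABC_VENDORID, ABC_DEVICEID, &s->node) != 0) {
        sysfree((char *)s, (long)sizeof(struct abc_statics));
        return (struct abc_statics *)SYSERR;
    }

    /* Map the 4 KB memory BAR that holds the device registers. */
    if (drm_map_resource(s->node, PCI_RESID_BAR0, &s->regs) != 0) {
        drm_free_handle(s->node);
        sysfree((char *)s, (long)sizeof(struct abc_statics));
        return (struct abc_statics *)SYSERR;
    }

    /* Register the interrupt service routine for this device instance. */
    drm_register_isr(s->node, abc_isr, (void *)s);

    return s;
}
```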
Sample Application