
LynxOS Networking Guide


SCMP

SCMP (Snugly Coupled Multiprocessing) allows multiple CPU boards connected to the same backplane (VME) to communicate with each other using the socket interface and TCP/IP protocols.

This chapter describes how to configure an SCMP environment for use with LynxOS.

Overview

SCMP splits complex applications across multiple processors, resulting in faster overall computational throughput and improved real-time response. Some of the features of SCMP support include:

SCMP Concepts

SCMP functions at the data link layer in a generic networking model. Communication above the SCMP layer relies on TCP/IP protocols. To an application, an SCMP connection looks like an Ethernet or SLIP connection.

The following figure shows a generic network layer model with SCMP.

Generic Network Model based on OSI/ISO

The SCMP configuration consists of a primary processor and one or more secondary processor(s). The processor boards in the SCMP configuration can have local disks or be diskless clients.

Note: The description of SCMP configuration in this chapter assumes that each processor on the shared bus has a local backplane disk with LynxOS installed. To configure diskless clients for SCMP, see "Starting SCMP".

The backplane network driver is responsible for transferring packets over the backplane. This driver provides the same interface to the TCP/IP protocol code as an Ethernet driver and is implemented as a kernel thread. In this way, a cluster of processors running LynxOS can work together while sharing the common backplane. They can be configured to work on different applications, or on a complex application that is split across processors. Such applications can communicate over the backplane using sockets.

The following figure shows a hypothetical configuration of VME processors that communicate via SCMP over a shared VME backplane.

Hypothetical SCMP Configuration

The bus backplane is considered to be an Internet network with its own network ID and each board on the backplane is assigned a unique IP address within that network.
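For example, one common approach (an assumption, not a requirement stated here) is to reserve a private network for the backplane and list each board in /etc/hosts. A hypothetical fragment, using the example addresses and hostnames that appear later in this chapter, might look like this:

192.1.1.1    scmp-cpu1
192.1.1.2    scmp-cpu2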

Configuration

Configuring an SCMP environment involves installing the hardware properly in the VME cage, configuring LynxOS with the appropriate information about the hardware, and initializing SCMP on the LynxOS machine.

The following sections describe these steps:

SCMP - VMEbus Example Used in This Chapter

For simplicity, this chapter uses an example SCMP setup (shown in the following example) to describe how to configure an SCMP environment.

Example SCMP Configuration

Setting Up the Hardware for SCMP

LynxOS SCMP supports up to eight VME boards sharing a single VMEbus. Only boards of a single architecture type (PowerPC) can be included. LynxOS currently supports the following CPU boards and VME bridge chips for SCMP:

Supported CPU Boards and VMEbus Drivers

Board      VME Bridge Chip   LynxOS VME Driver
MVME5100   Universe          uvme
MVME5101   Universe          uvme

Have the documentation for the particular processor boards and the VME cage available during the initial setup of the hardware for SCMP. Before installing SCMP, review VMEbug commands and concepts.

Note: In this chapter, the term VMEbug is used to generically refer to the Bug Monitor of the PowerPC system.

Assembling the System

Select one processor board to be the system controller on the VMEbus. To do this, set the appropriate jumpers on the board (as described in the hardware manual) and place the board in slot 0 of the VME cage. Some VME chassis may require the boards to be in adjacent slots.

From the perspective of LynxOS, there is one primary processor and one or more secondary processor(s). The primary processor may or may not be the same board as the system controller.

Assigning the Board Addresses

The processor boards communicate by accessing each other's DRAM in the VMEbus A32/D32 address space. They also communicate by using the Universe SW-INT interrupt. To configure an SCMP environment, assign a unique address to each board so this communication can occur.

VME Shared Memory Basics

Each board connected to the VMEbus is assigned a unique 32-bit address range for its DRAM. Using the MVME5100 as an example, the following figure illustrates how shared memory is accessed through VME space. This example assumes there are two boards (board 1 and board 2) in the SCMP setup, both making the top 0x03FFF000 bytes of their DRAM available to VME space.

Note that for each processor board, VME space address 0x0 is accessible at processor address 0xD0000000. Therefore, in the figures below, where board 2's DRAM is mapped to VME address 0x04000000, it is available to board 1 at 0xD0000000+0x04000000. Similarly, board 1's DRAM is mapped to VME address 0x0 and is available to board 2 at address 0xD0000000+0x0.

The following figure illustrates how board 1's DRAM is mapped to VME space and where it is visible in board 2's address space.

Board 1 to Board 2 Shared Memory Example

The following figure illustrates how board 2's DRAM is mapped to VME space and where it is visible in board 1's address space.

Board 2 to Board 1 Shared Memory Example

For each processor in the SCMP environment, select and program the following addresses into the VME device. In the case of the Universe chip, the VME device installed on MVME5100 and MVME5101 boards, this requires programming the following:

Selecting the Slave Addresses

Each board also must have a unique slave address. The VME Slave Image Base Address mentioned above is the starting address in VME space where a processor board's DRAM is mapped. The VME Slave Image Bound Address determines the amount of local DRAM mapped. Once a board's DRAM is mapped into VME space at some address Y, it becomes visible to all other boards in the system at physical address 0xD0000000+Y. Note that this is valid only after the system boots with LynxOS.
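For example, with the addresses used in this chapter, board 2's DRAM is mapped into VME space at Y = 0x04000000, so it is visible to board 1 at physical address:

0xD0000000 + 0x04000000 = 0xD4000000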

Programming the VME Bridge Chip

Use the addresses in the following table to program the VME bridge chip by entering a firmware-specific command at the VMEbug monitor that modifies the environment variables. Make sure that there is no overlap in the address ranges for each board.

Example DRAM Start and End Addresses

System   Amount of DRAM Mapped   Starting DRAM (Slave) Address   Ending DRAM (Slave) Address
CPU1     64 MB                   0x00000000                      0x03FFF000
CPU2     64 MB                   0x04000000                      0x07FFF000

Programming the Universe Chip

The Universe chip must be programmed on each board with the DRAM slave addresses. For the primary processor, and for secondary processors that are booted from disk, the firmware values for the VME slave images are overwritten by the Universe driver; refer to /sys/dheaders/uvmeinfo.h for the default values. For secondary boards that are booted over the backplane using the slaveboot utility, these values need to be programmed with the VMEbug firmware (PPCBUG in the case of MVME5100/MVME5101 boards). Below is an example of how the Universe registers can be set up in the secondary processor to enable it for a slaveboot.

Example Settings

Set the VMEbus Slave Image 0 Control value to 0xC0F20000 for all MVME5100 boards in the configuration.

It is important to note that the MVME5100 and MVME5101 boards start in CHRP address mapping mode; when LynxOS boots, it transitions the board to PReP mode. Therefore, when programming the board's VME slave images from PPCBUG, the values must be entered in accordance with CHRP addressing.

In CHRP mode, the VME Slave Image Translation Offset is calculated by subtracting the VME Slave Image Base Address from 0x00000000 (the subtraction wraps modulo 2^32). For example:

0x00000000 - 0x04000000 = 0xFC000000

In PReP mode, the VME Slave Image Translation Offset is calculated by subtracting the base address from 0x80000000. For example:

0x80000000 - 0x04000000 = 0x7C000000

The following example uses a Slave Image Base Address Register (DRAM address) of 0x04000000. This makes the DRAM of this board available at VME address 0x04000000.

PPC1_Bug> env
[...]
[...]
VMEbus Slave Image 0 Control C0F20000
VMEbus Slave Image 0 Base Address Register 04000000
VMEbus Slave Image 0 Bound Address Register 07FFF000
VMEbus Slave Image 0 Translation Offset FC000000
[...]
[...]
Update Non-Volatile RAM (Y/N)? Y

Programming the Universe Chip on an MVME2600 Board

Verifying the Hardware Setup

Before beginning the software (LynxOS) phase of setting up SCMP, verify that the hardware is properly installed and configured.

A quick way to check the hardware setup is to use the md (memory dump) command at the VMEbug prompt. The steps in the following section show how to test the example setup used in this chapter.
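For instance, once the slave images are programmed as described in the next section, board 1 should be able to dump the first words of board 2's DRAM across the VMEbus (a hypothetical check; the exact address depends on how the slave images are programmed):

[board1] PPC1-Bug> md 84000000 20

If the remote memory is not reachable, the firmware typically reports a bus error instead of displaying memory contents.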

Troubleshooting a VME Board that Uses the Universe Chip

If the SCMP configuration is not working, verify that the hardware in the system is installed correctly and check the firmware settings as described below. These steps assume that there are two boards in the SCMP setup.

  1. On both boards in the SCMP configuration, program the following values for the PCI Slave Image #1 with the env command.

    PCI Slave Image 1 Control Register             C0820000
    PCI Slave Image 1 Base Address Register        80000000
    PCI Slave Image 1 Bound Address Register       90000000
    PCI Slave Image 1 Translation Offset Register  80000000

The above translation indicates that PCI address 0x80000000 is mapped to VME address 0x80000000 + 0x80000000 = 0x0 (the addition wraps modulo 2^32). Since the board is in CHRP mode, PCI address 0x80000000 is also CPU/physical address 0x80000000. Therefore, any access to the physical address range 0x80000000-0x90000000 translates to the VME address range 0x0-0x10000000.
  2. Program the VME slave images on all boards so that the DRAM of every board is available at a unique VME address. The unique VME addresses assigned to the top portion (64 MB in this example) of DRAM on all boards must lie within the range 0x0-0x10000000. For example, on board 2, program the following values into the VME slave image:

    VMEbus Slave Image 0 Control                   E0F20000
    VMEbus Slave Image 0 Base Address Register     04000000
    VMEbus Slave Image 0 Bound Address Register    07FFF000
    VMEbus Slave Image 0 Translation Offset        FC000000

This maps the DRAM of this board to the VME address range 0x04000000-0x07FFF000. The first board can now access this board's DRAM at address 0x84000000. Similarly, set up board 1's DRAM to be visible at the VME address range 0x00000000-0x03FFF000.
  3. Use the PPCBug md (memory dump) command to verify that the boards can access each other's memory across the VMEbus.

Can board 1 see board 2? On board 1, dump the VME address at which board 2's DRAM is visible (0x84000000, as programmed in step 2) and compare the output with a dump of the same memory taken locally on board 2:
[board1] PPC1-Bug> md 84000000 20
[board2] PPC1-Bug> md 0 20
See the PPCBug firmware documentation for more information on the md command.
  4. Make sure the boards can see each other's Universe chip registers. On board 2, set the following values in the VMEbus Register Access registers with the PPCBUG env command:

    VMEbus Register Access Image Control Register       80F20000
    VMEbus Register Access Image Base Address Register  07FFF000

On board 1, set the base address to 0x03FFF000. Can the boards see each other's Universe registers?
[board1] PPC1-Bug> md 87FFF000 20

This should show the vendor ID/device ID combination of the Universe chip on board 2. Note that 0x07FFF000 is a VME address, and 0x87FFF000 is the CPU address at which this VME address is visible. Similarly, board 1's Universe registers should be visible to board 2 at address 0x83FFF000.

Setting Up LynxOS for SCMP

This is the software stage of the SCMP setup. At this point, these steps must be performed:

This section assumes that LynxOS is already installed on the local disk of each VME board in the SCMP environment. If LynxOS is not installed, do so now.

Note: Any number of nodes in an SCMP environment can be diskless. However, it is easier to configure the system if at least the primary processor has a local disk. If a diskless secondary processor and a disk-based primary processor are used, configure the primary processor as described in this chapter. Then refer to "Network Booting Diskless Clients with LynxOS".

Copying the Current LynxOS Kernel

Because the SCMP setup scripts change the LynxOS kernel, it is a good idea to make a copy of the current running kernel before proceeding with the scripts.

# cp /lynx.os /lynx.os.pre-scmp

If there are problems, this copy of the kernel can be booted instead. See the preboot man page for details on booting alternate kernels.

Choosing the Primary Processor

Select the board to be the primary processor in the SCMP environment. Each processor board contains a database of information about all processors in the SCMP system. This database is contained in the /etc/bplane.conf file. The first entry in this database belongs to the primary processor. The following information is required for each processor in the system:

Set the VRAI base address in the A32 address space.

The following table provides an example of the kind of information that must be supplied for each processor:

Information Required for Each Processor

Hostname    DRAM Start Address   Combined IO Address
scmp-cpu1   0x00000000           0xD3FFF000
scmp-cpu2   0x04000000           0xD7FFF000
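In this example, each Combined IO Address is the processor address that corresponds to the board's VMEbus Register Access Image base address (the VRAI mentioned above), obtained by adding the 0xD0000000 offset described in "VME Shared Memory Basics". For example, for scmp-cpu2:

0xD0000000 + 0x07FFF000 = 0xD7FFF000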

Configuring the LynxOS Backplane Driver

The LynxOS backplane device driver must be configured for each of the processors in the SCMP environment. Complete the following steps for each processor in the system, starting with the primary processor.

Creating the Backplane Configuration File

First, create the file /etc/bplane.conf. This configuration file specifies the board ID, DRAM address, I/O address, and hostname for each board in the SCMP system. The following listing shows an example /etc/bplane.conf file.

# Type Boot-Flag DRAM-Address IO-Address Hostname
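# The first entry belongs to the primary processor.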
   9    0        0x00000000    0xD3FFF000 192.1.1.1
   9    0        0x04000000    0xD7FFF000 192.1.1.2

Example bplane.conf File

The /etc/bplane.conf fields are as follows:

For more information on the bplane.conf file, see its man page.

Modifying the Backplane Driver Information

  1. Move to the /sys/devices directory:

# cd /sys/devices
  2. Edit the if_bplaneinfo.c file.

The parameter MAX_NO_PROCESSORS must be equal to or greater than the total number of boards on the system (eight maximum). For example:
MAX_NO_PROCESSORS = 3

Rebuilding the Backplane Driver Information

Rebuild the backplane driver information file by typing make in the /sys/devices directory:

# make

Rebuilding the LynxOS Kernel

Now that the configuration files and backplane driver files are modified, rebuild the kernel to incorporate the changes:

# cd /sys/lynx.os
# make install.tcpip

Choose the SCMP option and answer the questions presented by the script.

This step allows the user to enable the bplane and VME chip drivers.

The VME backplane is considered to be an Internet network with its own Internet network ID. Choose a unique network address for the backplane network. Similarly, enter the corresponding Internet host IDs and hostnames for the processor boards on the backplane. These values should correspond to the values specified in /etc/bplane.conf.
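For the example configuration used in this chapter, the backplane network ID could be given as 192.1.1.0, with host addresses 192.1.1.1 and 192.1.1.2 for the two processor boards, matching the example /etc/bplane.conf file shown above.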

Starting SCMP

After rebuilding the kernel, reboot the primary processor with the command shown below. This brings up the SCMP-configured kernel on the primary processor. The -N option remakes the /dev directory with the bplane device added.

# reboot -aN

Note: The primary processor must be running SCMP before the secondary processor(s) can run SCMP. To boot a client processor, please refer to "Network Booting Diskless Clients with LynxOS".

If the primary processor is disk-based and is exporting a file system for the secondary processors, follow steps 1 through 3 below:

  1. Enable TFTP Services:

- Uncomment the line starting with tftp in the file /etc/inetd.conf.
- Restart the inetd process (/net/inetd):
The command ps -ax | grep inetd gives the process ID of the inetd process. Use this process ID to send inetd a HUP signal (kill -1) so that it rereads its configuration file:
# ps -ax | grep inetd
# kill -1 <process_id>
  2. Export the root directory for the diskless client via NFS:

For example, if /clients/cpu2 is the directory containing, or linked to, the root file system meant for the diskless client, this directory must be exported.
A) Add the following line to the file /etc/exports:
/clients/cpu2 root=<client_processor>
where <client_processor> is the diskless client processor importing the root file system from the server.
B) Export the directory to the client:
# exportfs -av
For more information, see "Ethernet Netboot".
  3. Build the diskless client configuration files, the SCMP-bootable image, and the necessary start-up files:

The SCMP netbootable image is built in the directory $ENV_PREFIX/sys/romkit/scmp-boot.

Note: ENV_PREFIX is / for native development systems.

The template file spec.tmpl is used to create the spec file for mkimage: ppc-x.netboot.spec, where x is the CPUID, which is 1 by default. This spec file can be edited to add/delete any item(s) from the RAM disk.
C) Configure the diskless client:
# cd $ENV_PREFIX/sys/romkit/scmp-boot
# make config
This asks a few questions in order to configure the diskless client's image. It creates the directory sys-1, where the kernel for the diskless client is copied.
D) Build the SCMP netbootable image for the diskless client:
# make all
E) Install the image in the download directory (/tftpboot by default):
# make install
To configure and build images for multiple clients, specify the CPUID variable with the make command:
# make CPUID=<x> config
# make CPUID=<x> all
# make CPUID=<x> install
where <x> is the client ID. When CPUID is not specified, it defaults to 1.
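For example, to configure, build, and install the image for a hypothetical second diskless client (client ID 2), the sequence would be:

# make CPUID=2 config
# make CPUID=2 all
# make CPUID=2 install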

To manually boot the client processor, type the following at the firmware prompt of the client processor:

Motorola PowerPC client:
PPC1-Bug> go 4020

