Software Product Description
___________________________________________________________________

PRODUCT NAME: Compaq OpenVMS Cluster Software                    SPD 29.78.19

April 2001

This Software Product Description describes Versions 6.2-1H3, 7.1-1H1, 7.1-1H2, 7.1-2, 7.2, 7.2-1, 7.2-1H1, and 7.3 of the following products:

o Compaq VMScluster Software for OpenVMS Alpha

o Compaq VAXcluster Software for OpenVMS VAX

o Compaq OpenVMS Cluster Client Software for Alpha (part of NAS150)

o Compaq OpenVMS Cluster Client Software for VAX (part of NAS150)

Except where noted, the features described in this SPD apply equally to Alpha and VAX systems. Compaq OpenVMS Cluster Software licenses and part numbers are architecture specific; refer to the Ordering Information section of this SPD for further details.

DESCRIPTION

Compaq OpenVMS Cluster Software is an OpenVMS System Integrated Product (SIP). It provides a highly integrated OpenVMS computing environment distributed over multiple Alpha and VAX systems. In this SPD, this environment is referred to as an OpenVMS Cluster.

Systems in an OpenVMS Cluster system can share processing, mass storage (including system disks), and other resources under a single OpenVMS security and management domain. Within this highly integrated environment, systems retain their independence because they use local, memory-resident copies of the OpenVMS operating system. Thus, OpenVMS Cluster systems can boot and shut down independently while benefiting from common resources.

Applications running on one or more systems in an OpenVMS Cluster system can access shared resources in a coordinated manner. OpenVMS Cluster software components synchronize access to shared resources, allowing multiple processes on any system in the OpenVMS Cluster to perform coordinated, shared data updates.

Because resources are shared, OpenVMS Cluster systems offer higher availability than standalone systems. Properly configured OpenVMS Cluster systems can withstand the shutdown or failure of various components. For example, if one system in an OpenVMS Cluster is shut down, users can log in to another system to create a new process and continue working. Because mass storage can be shared clusterwide, the new process is able to access the original data. Applications can be designed to survive these events automatically.

All OpenVMS Cluster systems have the following software features in common:

o The OpenVMS operating system and OpenVMS Cluster software allow all systems to share read and write access to disk files in a fully coordinated environment. Application programs can specify the level of clusterwide file sharing that is required; access is then coordinated by the OpenVMS extended QIO processor (XQP) and Record Management Services (RMS). Coherency of multiple-system configurations is implemented by OpenVMS Cluster software using a flexible and sophisticated per-system voting mechanism.

o Shared batch and print queues are accessible from any system in the OpenVMS Cluster system. The OpenVMS queue manager controls clusterwide batch and print queues, which can be accessed by any system. Batch jobs submitted to clusterwide queues are routed to any available system so the batch load is shared.

o The OpenVMS Lock Manager system services operate in a clusterwide manner. These services allow reliable, coordinated access to any resource and provide signaling mechanisms at the system and process level across the whole OpenVMS Cluster system.
o All disks and tapes in an OpenVMS Cluster system can be made accessible to all systems.

o Process information and control services, including the ability to create and delete processes, are available on a clusterwide basis to application programs and system utilities. (Clusterwide process creation is available with Version 7.1 and higher.)

o Configuration command procedures assist in adding and removing systems and in modifying their configuration characteristics.

o The dynamic Show Cluster utility displays the status of OpenVMS Cluster hardware components and communication links.

o A fully automated clusterwide data and application caching feature enhances system performance and reduces I/O activity.

o Logical names can be defined that are visible across multiple nodes in an OpenVMS Cluster (Version 7.2 and higher).

o An application programming interface (API) allows applications within multiple OpenVMS Cluster nodes to communicate with each other (Version 7.2 and higher).

o Standard OpenVMS system management and security features work in a clusterwide manner so that the entire OpenVMS Cluster system operates as a single security and management domain.

o The OpenVMS Cluster software dynamically balances the interconnect I/O load in OpenVMS Cluster configurations that include multiple interconnects.

o Multiple OpenVMS Cluster systems can be configured on a single or extended local area network (LAN). LANs and the LAN adapters used for OpenVMS Cluster communications can be used concurrently by other network protocols.

o The optionally installable DECamds availability management tool (as well as Availability Manager) allows system managers to monitor and manage resource availability in real time on all the members of an OpenVMS Cluster.

o Cross-architecture satellite booting permits VAX boot nodes to provide boot service to Alpha satellites and allows Alpha boot nodes to provide boot service to VAX satellites.

o System services enable applications to automatically detect changes in OpenVMS Cluster membership.

Definitions

The following terms are used frequently throughout this SPD:

o Boot node - A system that is both a MOP server and a disk server. A boot node can fully service satellite boot requests.

o System - An Alpha family or VAX family computer running the OpenVMS operating system. A system comprises one or more processors and operates as an OpenVMS Cluster node. An OpenVMS Cluster node can also be referred to as an OpenVMS Cluster member.

o Disk server - A system that uses the OpenVMS MSCP server to make disks to which it has direct access available to other systems in the OpenVMS Cluster system.

o HSC, HSJ - An intelligent mass storage controller subsystem that connects to the CI bus.

o HSD - An intelligent mass storage controller subsystem that connects to the DSSI bus.

o HSG - An intelligent mass storage controller subsystem that connects to the Fibre Channel bus.

o HSZ - An intelligent mass storage controller subsystem that connects to the SCSI bus.

o MDR (Compaq Modular Data Router) - A Fibre Channel to SCSI bridge that allows SCSI tape devices to be used behind a Fibre Channel switch.

o Maintenance Operations Protocol (MOP) server - A system that services satellite boot requests to provide the initial LAN downline load sequence of the OpenVMS operating system and OpenVMS Cluster software. At the end of the initial downline load sequence, the satellite uses a disk server to perform the remainder of the OpenVMS booting process.
o Mixed-architecture OpenVMS Cluster system - An OpenVMS Cluster system that is configured with both VAX and Alpha systems.

o MSCP (mass storage control protocol) - A message-based protocol for controlling Digital Storage Architecture (DSA) disk storage subsystems. The protocol is implemented by the OpenVMS DUDRIVER device driver.

o Multihost configuration - A configuration in which more than one system is connected to a single CI, DSSI, SCSI, or Fibre Channel interconnect.

o Satellite - A system that is booted over a LAN using a MOP server and disk server.

o Single-host configuration - A configuration in which a single system is connected to a CI, DSSI, SCSI, or Fibre Channel interconnect.

o Star coupler - A common connection point for all CI connected systems and HSC and HSJ controllers.

o Tape server - A system that uses the OpenVMS TMSCP server to make tapes to which it has direct access available to other systems in the OpenVMS Cluster system.

o TMSCP (tape mass storage control protocol) - A message-based protocol for controlling DSA tape-storage subsystems. The protocol is implemented by the OpenVMS TUDRIVER device driver.

o Vote - Systems in an OpenVMS Cluster system can be configured to provide votes that are accumulated across the multi-system environment. Each system is provided with knowledge of how many votes are necessary to meet a quorum before distributed shared access to resources is enabled. An OpenVMS Cluster system must be configured with at least one voting system.
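Note: The following sketch is illustrative only and is not part of the product. It shows the arithmetic behind the vote and quorum mechanism described above, assuming the rule documented in the OpenVMS Cluster Systems manual that quorum is computed from EXPECTED_VOTES as (EXPECTED_VOTES + 2) divided by 2, truncated to an integer. The node names and helper functions are hypothetical; in a running cluster this accounting is performed by the connection manager.

# Illustrative sketch only: quorum arithmetic for a per-system voting scheme.
# quorum = (expected_votes + 2) // 2 follows the documented OpenVMS rule;
# node names and functions below are hypothetical.

def quorum_required(expected_votes: int) -> int:
    """Minimum accumulated votes needed before the cluster enables
    distributed shared access to resources."""
    return (expected_votes + 2) // 2

def cluster_has_quorum(votes_by_member: dict[str, int], expected_votes: int) -> bool:
    """True when the votes contributed by current members meet quorum."""
    return sum(votes_by_member.values()) >= quorum_required(expected_votes)

# Example: three voting systems with one vote each (EXPECTED_VOTES = 3).
members = {"NODEA": 1, "NODEB": 1, "NODEC": 1}
print(quorum_required(3))              # 2
print(cluster_has_quorum(members, 3))  # True
del members["NODEC"]                   # one system shuts down
print(cluster_has_quorum(members, 3))  # True  (2 votes still meet quorum of 2)
del members["NODEB"]                   # a second system is lost
print(cluster_has_quorum(members, 3))  # False (shared access waits for quorum)

In practice, configurations are planned so that enough voting systems (or the quorum disk described under Configuration Rules) remain after the expected failures to keep the accumulated votes at or above quorum.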
Compaq OpenVMS Cluster Client Software

OpenVMS Cluster configurations can be configured with systems that operate and are licensed explicitly as client systems. Compaq OpenVMS Cluster Client licensing is provided as part of the Compaq NAS150 layered product. An individually available license for DS-series AlphaServers is also provided. Compaq OpenVMS Cluster Client systems contain full OpenVMS Cluster functionality as described in this SPD, with the following exceptions:

o Client systems cannot provide votes toward the operation of the OpenVMS Cluster system.

o Client systems cannot MSCP serve disks or TMSCP serve tapes.

Interconnects

OpenVMS Cluster systems are configured by connecting multiple systems with a communications medium, referred to as an interconnect. OpenVMS Cluster systems communicate with each other using the most appropriate interconnect available. In the event of interconnect failure, OpenVMS Cluster software automatically uses an alternate interconnect whenever possible. OpenVMS Cluster software supports any combination of the following interconnects:

o CI (computer interconnect)

o DSSI (Digital Storage Systems Interconnect)

o SCSI (Small Computer System Interface)

o FDDI (Fiber Distributed Data Interface)

o Ethernet (10/100, Gigabit)

o Asynchronous transfer mode (ATM) (emulated LAN configurations only)

o Memory Channel (Version 7.1 and higher only)

o Fibre Channel (storage only, Alpha only, Version 7.2-1 and higher only)

CI and DSSI are highly optimized, special-purpose interconnects for systems and storage subsystems in OpenVMS Cluster configurations. CI and DSSI provide both system-to-storage communication and system-to-system communication.

SCSI is an industry-standard storage interconnect. Multiple systems can be configured on a single SCSI bus, thereby providing multihost access to SCSI storage devices. Note that the SCSI bus is not used for system-to-system communication. Consequently, systems connected to a multihost SCSI bus must also be configured with another interconnect to provide system-to-system communication.

Fibre Channel is an evolving industry-standard interconnect for storage and communications. Support in OpenVMS V7.2-1 (and higher) allows for a storage-only interconnect in a multihost environment utilizing Fibre Channel switched topologies. Version 7.3 adds support for SCSI tapes utilizing the Modular Data Router bridge. As is true with SCSI, systems connected to a multihost Fibre Channel bus must also be configured with another interconnect to provide system-to-system communication.

Ethernet, ATM, and FDDI are industry-standard, general-purpose communications interconnects that can be used to implement a local area network (LAN). Except where noted, OpenVMS Cluster support for these LAN types is identical. The ATM device must be used as an emulated LAN configured device. Ethernet and FDDI provide system-to-system communication. Storage can be configured in FDDI environments that support FDDI-based storage servers.

OpenVMS Cluster configurations can be configured using wide area network (WAN) infrastructures, such as DS3, E3, and ATM. Connection to these media is achieved by the use of WAN interswitch links (ISLs).

Memory Channel is a high-performance interconnect that provides system-to-system communication. Memory Channel does not provide direct access to storage, so a separate storage interconnect is required in Memory Channel configurations.

Configuration Rules

o The maximum number of systems supported in an OpenVMS Cluster system is 96.

o Every system in an OpenVMS Cluster system must be connected to every other system via any supported OpenVMS Cluster interconnect (see Table 1).

o VAX-11/7xx, VAX 6000, VAX 7000, VAX 8xxx, VAX 9000, and VAX 10000 series systems require a system disk that is accessed via a local adapter or through a local CI or DSSI connection. These systems cannot be configured to boot as satellite nodes.

o All systems connected to a common CI, DSSI, or Memory Channel interconnect must be configured as OpenVMS Cluster members. OpenVMS Cluster members configured on a CI, DSSI, or Memory Channel will become members of the same OpenVMS Cluster (this is imposed automatically by the OpenVMS Cluster software). All systems connected to a multihost SCSI bus must be configured as members of the same OpenVMS Cluster.

o An OpenVMS Cluster system can include any number of star couplers. Table 2 shows the number of CI adapters supported by different systems. The number of star couplers that a system can be connected to is limited by the number of adapters with which it is configured.

o The maximum number of systems that can be connected to a star coupler is 16, regardless of star coupler size.

o The KFQSA Q-bus to DSSI adapter does not support system-to-system communication across the DSSI; systems using this adapter must include another interconnect for system-to-system communication.

o The maximum number of systems that can be connected to a DSSI is four, regardless of system or adapter type. Any mix of systems and adapters is permitted, except where noted in the Hardware Support section of this SPD. Depending on the system model, it may not be possible to configure four systems on a common DSSI bus because of DSSI bus cable-length restrictions. Refer to the specific system configuration manuals for further information.
o The maximum number of systems that can be connected to a SCSI bus is three. If the SCSI bus includes a five-port or greater Fair Arbitration SCSI Hub (DWZZH-05), the maximum number of systems is increased to four.

o The maximum number of multihost SCSI buses that a system can be connected to is 26.

o The configuration size for Fibre Channel storage increases on a regular basis with new updates to OpenVMS. Refer to the Guidelines for OpenVMS Cluster Configurations manual for the most up-to-date configuration capabilities.

o Beginning with OpenVMS Version 7.2-1, Multipath Failover is supported for both parallel SCSI and Fibre Channel storage environments. This feature allows cluster storage communications to fail over from one path to another when multiple storage buses are connected to the same data source. For detailed information, refer to the Guidelines for OpenVMS Cluster Configurations manual.

o OpenVMS Cluster systems that are configured using WAN interconnects must adhere to the detailed line specifications described in the Guidelines for OpenVMS Cluster Configurations manual. The maximum system separation is 150 miles.

o A single time-zone setting must be used by all systems in an OpenVMS Cluster system.

o An OpenVMS Cluster system can be configured with a maximum of one quorum disk. A quorum disk cannot be a member of an OpenVMS volume set or of a shadow set created by the Volume Shadowing for OpenVMS product.

o A system disk can contain only a single version of the OpenVMS operating system and is architecture specific. For example, OpenVMS Alpha Version 7.1 cannot coexist on a system disk with OpenVMS VAX Version 7.1.

o HSJ and HSC series disks and tapes can be dual pathed between controllers on the same or different star couplers. The HSD30 series disks and tapes can be dual pathed between controllers on the same or different DSSI interconnects. Such dual pathing provides enhanced data availability using an OpenVMS automatic recovery capability called failover. Failover is the ability to use an alternate hardware path from a system to a storage device when a failure occurs on the current path. The failover process is transparent to applications. Dual pathing between an HSJ or HSC and a local adapter is not permitted. When two local adapters are used for dual pathing, each adapter must be located on a separate system of the same architecture. (Note: When disks and tapes are dual pathed between controllers that are connected to different star couplers or DSSI buses, any system connected to one of the star couplers or buses must also be connected to the other.)

o Disks can be dual pathed between pairs of HSZ controllers that are arranged in a dual-redundant configuration. The controllers must be connected to the same host SCSI bus. Failover is accomplished using the HSZ transparent failover capability.

o OpenVMS operating system and layered-product installations and upgrades cannot be performed across architectures. OpenVMS Alpha software installations and upgrades must be performed using an Alpha system with direct access to its system disk. OpenVMS VAX software installations and upgrades must be performed using a VAX system with direct access to its system disk.

o Ethernet LANs and the protocols that use them must conform to the IEEE 802.2 and IEEE 802.3 standards. Ethernet LANs must also support Ethernet Version 2.0 packet formats.
o FDDI LANs and the protocols that use them must conform to the IEEE 802.2, ANSI X3.139-1987, ANSI X3.148-1988, and ANSI X3.166-1990 standards.

o LAN segments can be bridged to form an extended LAN (ELAN). The ELAN must conform to IEEE 802.1D, with the following restrictions:

  - All LAN paths used for OpenVMS Cluster communication must operate with a nominal bandwidth of at least 10 megabits per second.

  - The ELAN must be capable of delivering packets that use the padded Ethernet Version 2.0 packet format and the FDDI SNAP/SAP packet format.

  - The ELAN must be able to deliver packets with a maximum data field length of at least 1080 bytes.[1]

  - The maximum number of bridges between any two end nodes is seven.

  - The maximum transit delay through any bridge must not exceed two seconds.

  - The ELAN must provide error-detection capability between end nodes that is equivalent to that provided by the Ethernet and FDDI data link frame-check sequences.

o The average packet-retransmit timeout ratio for OpenVMS Cluster traffic on the LAN from any system to another must be less than 1 timeout in 1000 transmissions.

____________________
[1] In the padded Ethernet format, the data field follows the 2-byte length field. These two fields together comprise the LLC data field in the 802.3 format.
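Note: The ELAN restrictions above lend themselves to a simple pre-configuration screen. The following sketch is illustrative only and is not part of the product; the numeric limits are taken from the rules listed above, while the function and field names are hypothetical.

# Illustrative sketch only: checking a proposed extended-LAN (ELAN) path
# against the OpenVMS Cluster restrictions listed above. Field names are
# hypothetical; the numeric limits come from this SPD.

from dataclasses import dataclass

@dataclass
class ElanPath:
    bandwidth_mbps: float            # nominal bandwidth of the slowest segment
    max_data_field_bytes: int        # largest deliverable packet data field
    bridge_count: int                # bridges between the two end nodes
    max_bridge_transit_s: float      # worst-case transit delay through any bridge
    retransmit_timeout_ratio: float  # observed timeouts per transmission

def elan_path_ok(path: ElanPath) -> list[str]:
    """Return the list of rule violations (an empty list means the path
    meets the documented ELAN restrictions)."""
    problems = []
    if path.bandwidth_mbps < 10:
        problems.append("nominal bandwidth below 10 Mb/s")
    if path.max_data_field_bytes < 1080:
        problems.append("cannot deliver a 1080-byte data field")
    if path.bridge_count > 7:
        problems.append("more than seven bridges between end nodes")
    if path.max_bridge_transit_s > 2.0:
        problems.append("bridge transit delay exceeds two seconds")
    if path.retransmit_timeout_ratio >= 1 / 1000:
        problems.append("retransmit timeout ratio not below 1 in 1000")
    return problems

# Example: a 100 Mb/s path that traverses too many bridges.
print(elan_path_ok(ElanPath(100.0, 1500, 9, 0.5, 0.0002)))
# ['more than seven bridges between end nodes']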
Recommendations

The optimal OpenVMS Cluster system configuration for any computing environment is based on requirements of cost, functionality, performance, capacity, and availability. Factors that affect these requirements include:

o Applications in use

o Number of users

o Number and models of systems

o Interconnect and adapter throughput and latency characteristics

o Disk and tape I/O capacity and access time

o Number of disks and tapes being served

o Interconnect utilization

Compaq recommends OpenVMS Cluster system configurations based on its experience with the OpenVMS Cluster Software product. The customer should evaluate specific application dependencies and performance requirements to determine an appropriate configuration for the desired computing environment.

When planning an OpenVMS Cluster system, consider the following recommendations:

o OpenVMS Cluster systems should be configured using interconnects that provide appropriate performance for the required system usage. In general, use the highest-performance interconnect possible. CI and Memory Channel are the preferred interconnects between powerful systems.

o Although OpenVMS Cluster systems can include any number of system disks, consider system performance and management overhead in determining their number and location. While the performance of configurations with multiple system disks may be higher than with a single system disk, system management efforts increase in proportion to the number of system disks.

o Data availability and I/O performance are enhanced when multiple OpenVMS Cluster systems have direct access to shared storage; whenever possible, configure systems to allow direct access to shared storage in favor of OpenVMS MSCP served access. Multiaccess CI, DSSI, SCSI, and Fibre Channel storage provides higher data availability than singly accessed, local adapter-based storage. Additionally, dual pathing of disks between local or HSC/HSJ/HSD/HSZ/HSG storage controllers enhances data availability in the event of controller failure.

o OpenVMS Cluster systems can enhance availability by utilizing redundant components, such as additional systems, storage controllers, disks, and tapes. Extra peripheral options, such as printers and terminals, can also be included. Multiple instances of all OpenVMS Cluster interconnects (CI, Memory Channel, DSSI, SCSI, Ethernet, ATM, Gigabit Ethernet, Fibre Channel, and FDDI) are supported.

o To enhance resource availability, OpenVMS Clusters that implement satellite booting should use multiple boot servers. When a server fails in configurations that include multiple servers, satellite access to multipath disks fails over to another path. Disk servers should be the most powerful systems in the OpenVMS Cluster and should use the highest-bandwidth LAN adapters available.

o The performance of an FDDI LAN varies with each configuration. When an FDDI is used for OpenVMS Cluster communications, the ring latency when the FDDI ring is idle should not exceed 400 microseconds. This ring latency translates to a cable distance between end nodes of approximately 40 kilometers.

o The ELAN must provide adequate bandwidth, reliability, and low delay to optimize the operation of the OpenVMS Cluster. In-depth configuration guidelines for ELAN environments are provided in the OpenVMS documentation set and are updated frequently as the technology evolves. For specific configuration information, refer to the following manuals:

  - OpenVMS Cluster Systems

  - Guidelines for OpenVMS Cluster Configurations

o The RAID level 1 storage functionality of Compaq Volume Shadowing for OpenVMS provides the following advantages:

  - Enhanced data availability in the event of disk failure

  - Enhanced read performance with multiple shadow-set members

  For more information, refer to the Compaq Volume Shadowing for OpenVMS Software Product Description (SPD 27.29.xx).

o The Compaq DECram for OpenVMS software product can be used to create high-performance, memory-resident RAM disks. Refer to the Compaq DECram for OpenVMS Software Product Description (SPD 34.26.xx) for additional information.

DECamds and Availability Manager Features

OpenVMS software incorporates DECamds, a real-time monitoring, investigation, diagnostic, and system management tool that can be used to improve overall cluster system availability. DECamds can be used in both clustered and nonclustered LAN environments.

The DECamds availability management tool contains a console and an OpenVMS device driver. The console is a DECwindows Motif based application that allows system managers to display windows showing processes, quotas, disks, locks, memory, SCS data structures, and I/O activity in the OpenVMS Cluster. The Motif display can be directed to any X-compatible display. The driver is a data collector that runs on the monitored OpenVMS systems. Console application and driver software is provided for Alpha and VAX systems.

Availability Manager is functionally similar to DECamds, but it runs on Windows-based systems and on OpenVMS Alpha.

HARDWARE SUPPORT

System Support

Any Alpha or VAX system, as documented in the Compaq OpenVMS Operating System for VAX and Alpha Software Product Description (SPD 25.01.xx), can be used in an OpenVMS Cluster.

Peripheral Option and Storage Controller Support

OpenVMS Cluster systems can use all peripheral options and storage subsystems supported by OpenVMS. Refer to the Compaq OpenVMS Operating System for VAX and Alpha SPD for more information.

Interconnect Support

Table 1 shows which systems are supported on which interconnects and whether the system can be booted as a satellite node over that interconnect.
All systems can service satellite boot requests over a LAN interconnect (FDDI or Ethernet).

Note: Levels of interconnect support and LAN booting capabilities are continually being increased. In many cases, these additional capabilities result from hardware option and system console microcode enhancements and are not dependent on OpenVMS software. Refer to the appropriate hardware option and system documentation for the most up-to-date information.

LAN Support

OpenVMS Cluster systems can use all Ethernet (10 Mb/s and 100 Mb/s) and FDDI LAN adapters supported by OpenVMS for access to Ethernet and FDDI interconnects. Any number of LAN adapters can be configured in any combination (with the exception that a Q-bus can be configured with only one FDDI adapter). Refer to the Compaq OpenVMS Operating System for VAX and Alpha Software Product Description for more information.

Gigabit Ethernet LAN adapters can be used for limited OpenVMS Cluster interconnect capability for Version 7.1-2 through Version 7.2-xx. OpenVMS Version 7.3 clusters provide more robust support for Gigabit Ethernet and ATM emulated LAN Ethernet connections. Additionally, OpenVMS Version 7.3 allows for load distribution of SCS cluster communications traffic across multiple, parallel LAN connections between cluster nodes. Refer to the release notes for your OpenVMS operating system version for specific limitations on these interconnects.

The DEFZA FDDI adapter is supported on VAX systems only.

Note: VAX systems cannot be booted over an FDDI.

___________________________________________________________________
Table 1: Interconnect Support
___________________________________________________________________
                                      Memory                             ATM,[3]   Fibre
System                                CI      Channel[1] DSSI   SCSI[2]  FDDI      Ethernet  Channel
___________________________________________________________________
AlphaServer GS 80/160/320,            Yes[4]  Yes        Yes[5] Yes      Yes+Sat   Yes       Yes
  GS60/140, 8200, 8400
AlphaServer ES40, 4000, 4100          Yes     Yes        Yes    Yes      Yes+Sat   Yes       Yes
AlphaServer 1200, 2000, 2100, 2100A   Yes     Yes        Yes    Yes      Yes+Sat   Yes+Sat   Yes[7]
AlphaServer DS10/10L/20, 1000, 1000A  -       Yes        Yes    Yes      Yes+Sat   Yes+Sat   Yes[8]
AlphaServer 400, 800                  -       -          Yes    Yes      Yes+Sat   Yes+Sat   Yes[9]
AlphaServer 300                       -       -          -      Yes      Yes       Yes+Sat   -
AlphaStations                         -       -          -      Yes      Yes+Sat   Yes+Sat   -
DEC 7000, 10000                       Yes     -          Yes    -        Yes+Sat   Yes       -
DEC 4000                              -       -          Yes    -        Yes       Yes+Sat   -
DEC 3000                              -       -          -      Yes      Yes+Sat   Yes+Sat   -
DEC 2000                              -       -          -      -        Yes       Yes+Sat   -
VAX 6000, 7000, 10000                 Yes     -          Yes    -        Yes       Yes       -
VAX 8xxx, 9xxx, 11/xxx                Yes     -          -      -        -         Yes       -
VAX 4xxx[12]                          -       -          Yes    -        Yes       Yes+Sat   -
VAX 2xxx, 3xxx[12]                    -       -          -      -        -         Yes+Sat   -
___________________________________________________________________
[1] Version 7.1 and higher only.
[2] This column refers to multihost SCSI connectivity. Refer to the appropriate system documentation for information regarding single-host connectivity to SCSI buses.
[3] ATM using an emulated LAN configuration can be used as a cluster interconnect on all AlphaServer systems, except for AlphaServer 300 and 400 systems. ATM is not supported on the DEC-series systems listed nor on VAX systems.
[4] "Yes" means that the system is supported on this interconnect but cannot be booted as a satellite over this interconnect.
[5] DSSI is not supported on GS-series AlphaServers.
[6] "Yes+Sat" means that the system is supported on this interconnect and can be booted as a satellite node over this interconnect.
[7] AlphaServer 1200 only.
[8] Excludes AlphaServer 1000.
[9] AlphaServer 800 only.
[10] Version 7.1 and higher only. Most models provide FDDI booting capability. Refer to system-specific documentation for details.
[11] Using DEFTA only.
[12] Some models may provide slightly different interconnect support. Refer to system-specific documentation for details.
___________________________________________________________________

CI Support

OpenVMS Cluster systems can be configured with multiple CI adapters. Table 2 shows the types of adapters that are supported by each system. There can be only one type of adapter configured in a system (with the exception that, with OpenVMS Version 7.1, CIXCD and CIPCA adapters can be configured together in the same system). The maximum number of each type is noted in the table. The CI adapters in a system can connect to the same or different star couplers.

Note: The CIBCA-A adapter cannot coexist with a KFMSA adapter on the same system.

Note: The CIBCA-A and CIBCA-B are different adapters.

___________________________________________________________________
Table 2: CI Adapter Support
___________________________________________________________________
System                            CI750   CI780   CIBCI   CIBCA-A  CIBCA-B  CIXCD   CIPCA
___________________________________________________________________
AlphaServer GS, 8400              -       -       -       -        -        10      10,26[1]
AlphaServer 8200                  -       -       -       -        -        -       10,26[1]
AlphaServer ES, 4000, 4100        -       -       -       -        -        -       3[2]
AlphaServer 4000 + I/O expansion  -       -       -       -        -        -       6[3]
AlphaServer DS, 2100A, 1200       -       -       -       -        -        -       3
AlphaServer 2000, 2100            -       -       -       -        -        -       2[4]
DEC 7000, 10000                   -       -       -       -        -        10      -
VAX 11/750                        1       -       -       -        -        -       -
VAX 11/780, 11/785                -       1       -       -        -        -       -
VAX 6000                          -       -       -       1        4        4       -
VAX 82xx, 83xx                    -       -       1       1        1        -       -
VAX 86xx                          -       2       -       -        -        -       -
VAX 85xx, 8700, 88xx              -       -       1       1        2        -       -
VAX 9000                          -       -       -       -        -        6       -
VAX 7000, 10000                   -       -       -       -        -        10      -
___________________________________________________________________
[1] The two numbers represent the support limits for Version 6.2-1H3 and for Version 7.1 and higher, respectively.
[2] For three CIPCAs, one must be a CIPCA-AA and two must be CIPCA-BA.
[3] Only three can be CIPCA-AA.
[4] Only one can be a CIPCA-BA.
___________________________________________________________________

Observe the following guidelines when configuring CIPCA adapters:

o The CIPCA adapter can coexist on a CI bus with CIXCD and CIBCA-B CI adapters and all variants of the HSC/HSJ controller except the HSC50. Other CI adapters cannot be configured on the same CI bus as a CIPCA. HSC40/70 controllers must be configured with a Revision F (or higher) L109 module.

o The CIPCA-AA adapter occupies a single PCI backplane slot and a single EISA backplane slot.

o The CIPCA-BA adapter occupies two PCI backplane slots.

Star Coupler Expander

A CI star coupler expander (CISCE) can be added to any star coupler to increase its connection capacity to 32 ports. The maximum number of systems that can be connected to a star coupler is 16, regardless of the number of ports.

Memory Channel Support (Version 7.1 and higher only)

Memory Channel is supported on all AlphaServer systems starting with the AlphaServer 1000. Observe the following rules when configuring Memory Channel:

o A maximum of eight systems can be connected to a single Memory Channel interconnect.

o Systems configured with Memory Channel adapters require a minimum of 128 megabytes of memory.

o A maximum of two Memory Channel adapters can be configured in a system. Configuring two Memory Channel interconnects can improve the availability and performance of the cluster configuration. Only one Memory Channel adapter may be configured in an AlphaServer 8xxx DWLPA I/O channel that is configured with any other adapter or bus option. This restriction does not apply to the DWLPB I/O channel, or to DWLPA I/O channels that have no other adapters or bus options.

o Multiple adapters in a system cannot be connected to the same Memory Channel hub.

o Memory Channel adapters must all be the same version. Specifically, a Memory Channel V1.5 adapter cannot be mixed with a Memory Channel V2.0 adapter within the same connection.

DSSI Support

Any mix of Alpha and VAX DSSI adapters can be configured on a common DSSI bus (except where noted in the following list). Refer to the appropriate hardware manuals for specific adapter and configuration information. The following points provide general guidelines for configurations:

o Configure the AlphaServer systems shown in Table 1 with KFPSA (PCI to DSSI) adapters. The KFPSA is the highest-performance DSSI adapter and is recommended wherever possible.

o Other supported adapters include:

  - KFESB (EISA to DSSI) for all AlphaServer systems except 4xxx and 8xxx models

  - KFESA (EISA to DSSI) for AlphaServer 2100 systems

  - KFMSB for Alpha XMI systems

  - KFMSA for VAX XMI systems

  - KFQSA for VAX Q-bus systems

o KFMSB adapters and KFPSA adapters cannot be configured on the same DSSI bus.

o Up to 24 KFPSAs can be configured on a system.

o Up to 6 KFMSA/Bs can be configured on an XMI bus.

o Up to 12 KFMSA/Bs can be configured in a system.

o Up to four KFESBs can be configured on a system.
o Up to two KFESAs can be configured on a system.

o A mix of one KFESB and one KFESA can be configured on a system.

o Because the DEC 4000 DSSI adapter terminates the DSSI bus, only two DEC 4000s can be configured on a DSSI.

o Some newer AlphaServer processors support DSSI: the GS series and the DS20 series have support, while other DS series models and the ES series do not.

Multihost SCSI Support

Compaq OpenVMS Cluster Software provides support for multihost SCSI configurations using Alpha systems and SCSI adapters, devices, and controllers. Table 1 shows which systems can be configured on a multihost SCSI bus.

Any AlphaStation or AlphaServer system that supports optional KZPSA (fast-wide differential) or KZPBA-CB (ultrawide differential; Version 7.1-1H1 and higher only) adapters can use them to connect to a multihost SCSI bus. Refer to the appropriate system documentation for system-specific KZPSA and KZPBA support information.

Single-host Ultra SCSI connections with either the KZPBA-CA (ultrawide single-ended adapter) or the KZPBA-CB (ultrawide differential adapter) are supported in Version 6.2-1H3 and higher.

Also, any AlphaStation or AlphaServer system except the AlphaServer 4000, 4100, 8200, and 8400 can use embedded NCR-810-based SCSI adapters or optional KZPAA adapters to connect to a multihost SCSI bus.

Additionally, DEC 3000 systems can use optional KZTSA (fast-wide differential) adapters to connect to a multihost SCSI bus.

Note: A wide range of SCSI adapters can be used to connect to a single-host SCSI bus. For further information about the complete range of SCSI support, refer to the Compaq OpenVMS Operating System for VAX and Alpha Software Product Description.

Compaq recommends optional adapters for connection to multihost buses. Use of optional adapters simplifies SCSI cabling and also leaves the embedded system adapter available for tape drives, floppies, and CD-ROMs.

Multihost SCSI configurations can include DWZZA/DWZZB single-ended SCSI to differential SCSI converters.

Multihost SCSI buses can be configured with any appropriately compliant SCSI-2 or SCSI-3 disk. Disks must support the following three features (an illustrative screening sketch appears at the end of this section):

o Multihost support

o Tagged command queueing

o Automatic bad block revectoring

These SCSI disk requirements are fully documented in the Guidelines for OpenVMS Cluster Configurations manual. In general, nearly all disk drives available today, from Compaq or third-party suppliers, support these features. Known exceptions among Compaq drives are the RZ25 and RZ26F, which do not support tagged command queueing.

Tape drives, floppy disks, and CD-ROMs cannot be configured on multihost SCSI buses. Configure these devices on single-host SCSI buses.

HSZ series storage controllers can be configured on a multihost SCSI bus. Refer to the appropriate HSZ storage controller documentation for configuration information. Note that it is not possible to configure tape drives, floppy disks, or CD-ROMs on HSZ controller storage buses when the HSZ is connected to a multihost SCSI bus.

Multihost SCSI buses must adhere to all SCSI-2 or SCSI-3 specifications. Rules regarding cable length and termination must be adhered to carefully. Refer to the SCSI-2 or SCSI-3 specification or the Guidelines for OpenVMS Cluster Configurations manual for further information.
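Note: The following sketch is illustrative only and is not part of the product. It consolidates the three disk requirements listed above into a simple screening check; the capability flags shown are placeholders (only the RZ25's lack of tagged command queueing comes from this SPD), and real configuration decisions should be based on the Guidelines for OpenVMS Cluster Configurations manual and the drive documentation.

# Illustrative sketch only: screening a disk for the three multihost SCSI
# requirements listed in this section. Device names and flags are
# placeholders, not a catalog of supported drives.

from dataclasses import dataclass

@dataclass
class ScsiDisk:
    model: str
    multihost_support: bool
    tagged_command_queueing: bool
    auto_bad_block_revectoring: bool

def multihost_ready(disk: ScsiDisk) -> list[str]:
    """Return the list of missing required features (empty means the disk
    meets the documented multihost SCSI requirements)."""
    missing = []
    if not disk.multihost_support:
        missing.append("multihost support")
    if not disk.tagged_command_queueing:
        missing.append("tagged command queueing")
    if not disk.auto_bad_block_revectoring:
        missing.append("automatic bad block revectoring")
    return missing

# Example: the SPD notes that the RZ25 lacks tagged command queueing;
# the other flags here are assumed values for illustration.
rz25 = ScsiDisk("RZ25", multihost_support=True,
                tagged_command_queueing=False,
                auto_bad_block_revectoring=True)
print(multihost_ready(rz25))   # ['tagged command queueing']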
Fibre Channel Storage Support

Beginning with Version 7.2-1, Compaq OpenVMS Cluster Software provides support for multihost Fibre Channel storage configurations using Alpha systems and Fibre Channel adapters, switches, and controllers. Direct-attached Fibre Channel storage and Arbitrated Loop Fibre Channel configurations are not supported. For the current configuration guidelines and limitations, refer to the Guidelines for OpenVMS Cluster Configurations manual. This manual outlines the specific requirements for the controller (HSG80 and HSG60), switch, and adapter (KGPSA-**), and for the disks that can be attached to this configuration. The supported number of hosts, adapters, and switches, and the distance between them, is constantly being increased, so refer to the manual for up-to-date information on this evolving area.

Starting with OpenVMS Version 7.3, SCSI tape devices can be connected to a Fibre Channel storage environment with the use of a Compaq Modular Data Router (MDR) bridge product. This bridge allows these tape devices to be placed behind the Fibre Channel switch environment and to be shared in the same manner as the Fibre Channel disks in the same fabric.

Because support for Fibre Channel is currently limited to storage only, a second interconnect for node-to-node communications must be present for full clustered capability to be utilized.

DECamds Console

Compaq recommends that the DECamds console run on a standalone workstation with a color monitor. However, it can also run on a workstation that is configured as an OpenVMS Cluster member, or on a nonworkstation system using DECwindows to direct the display to an X-based display.

SOFTWARE REQUIREMENTS

Compaq OpenVMS Operating System

Refer to the Compaq OpenVMS Operating System for VAX and Alpha Software Product Description (SPD 25.01.xx) for more information.

The ability to have more than one version of OpenVMS in an OpenVMS Cluster allows upgrades to be performed in a staged fashion so that continuous OpenVMS Cluster system operation is maintained during the upgrade process. Only one version of OpenVMS can exist on any system disk; multiple versions of OpenVMS in an OpenVMS Cluster require multiple system disks. Also, system disks are architecture specific: OpenVMS Alpha and OpenVMS VAX cannot coexist on the same system disk. The coexistence of multiple versions of OpenVMS in an OpenVMS Cluster configuration is supported according to the following conditions:

o Warranted support is provided for mixed-architecture OpenVMS Cluster systems in which all Alpha and VAX systems are running the same version of OpenVMS: Version 6.2-xxx, Version 7.0, Version 7.1-xxx, Version 7.2-xxx, or Version 7.3.

  Warranted support means that Compaq has fully qualified the two architectures coexisting in an OpenVMS Cluster and will answer any problems identified by customers using these configurations.

o Migration support is provided for OpenVMS Cluster systems running two versions of the OpenVMS operating system. These versions can be:

  - Any mix of Version 7.3, Version 7.2-1xx, Version 7.2, Version 7.1-2, Version 7.1-1Hx, and Version 7.1.

  - Any mix of Version 7.2, Version 7.1-xxx, and Version 6.2-xxx.

  - Any mix of Version 7.1, Version 7.0, and Version 6.2-xxx.

  - Any mix of Version 6.2-xxx with OpenVMS VAX Version 5.5-2, Version 6.0, or Version 6.1, and OpenVMS Alpha Version 1.5, Version 6.0, or Version 6.1.
  Migration support means that Compaq has qualified the two architectures and versions for use together in configurations that are migrating in a staged fashion to a higher version of OpenVMS or to Alpha systems. Compaq will answer problem reports submitted about these configurations. However, in exceptional cases, Compaq may recommend that you move your system to a warranted configuration as part of the solution.

Note: Compaq does not support the use of more than two versions of OpenVMS software in the same OpenVMS Cluster at the same time. However, in many cases, running more than two versions or mixing versions not described above will operate satisfactorily.

Compaq recommends that all Alpha and VAX systems in an OpenVMS Cluster run the latest version of OpenVMS.

DECnet Software

DECnet software is not required in an OpenVMS Cluster configuration. However, DECnet software is necessary for internode process-to-process communication that uses DECnet mailboxes.

The OpenVMS Version 6.2-1H3 Monitor utility uses DECnet for intracluster communication.

The OpenVMS Version 7.1 (and higher) Monitor utility uses TCP/IP- or DECnet-based transports, as appropriate, for intracluster communication.

Refer to the appropriate DECnet Software Product Description for further information.

DECamds

DECamds requires Compaq DECwindows Motif for OpenVMS. Refer to the Compaq DECwindows Motif for OpenVMS Software Product Description (SPD 42.19.xx) for details.

OPTIONAL SOFTWARE

For information about OpenVMS Cluster support for optional software products, refer to the OpenVMS Cluster Support section of the Software Product Descriptions for those products. Optional products that may be useful in OpenVMS Cluster systems include:

o Compaq Volume Shadowing for OpenVMS (SPD 27.29.xx)

o Compaq RAID Software for OpenVMS (SPD 46.49.xx)

o Compaq DECram for OpenVMS (SPD 34.26.xx)

o VAXcluster Console System (SPD 27.46.xx)

GROWTH CONSIDERATIONS

The minimum hardware and software requirements for any future version of this product may be different from the requirements for the current version.

DISTRIBUTION MEDIA

OpenVMS Cluster Software is distributed on the same distribution media as the OpenVMS Operating System. Refer to the OpenVMS Operating System for VAX and Alpha SPD for more information.

ORDERING INFORMATION

OpenVMS Cluster Software is orderable as follows:

Every server (nonclient) Alpha system in an OpenVMS Cluster configuration requires:

o VMScluster Software for OpenVMS Alpha

  - Software Licenses: QL-MUZA*-AA

  - Software Product Services: QT-MUZA*-**

  - LMF PAK Name: VMSCLUSTER

Note: Compaq VMScluster Software for OpenVMS Alpha provides a unique ordering and pricing model for single-CPU and dual-CPU capable systems. Specifically, all AlphaServer DS-series systems, along with AlphaServer 800 and 1200 systems, should use the QL-MUZAC-AA license order number; for service, use the corresponding QT-MUZAC-** order number. For all remaining AlphaServer systems in the Workgroup system class (such as the ES40), use the standard QL-MUZAE-AA license order number; for service, use the corresponding QT-MUZAE-** order number. VMScluster pricing and ordering for the remaining system classes of AlphaServers are unchanged.
Every server (nonclient) VAX system in an OpenVMS Cluster configuration requires:

o VAXcluster Software for OpenVMS VAX

  - Software Licenses: QL-VBRA*-AA

  - Software Product Services: QT-VBRA*-**

  - LMF PAK Name: VAXCLUSTER

OpenVMS Cluster Client Software is available as part of the NAS150 product. It is also separately orderable for DS-series AlphaServers.

o VMScluster Client Software for OpenVMS Alpha

  - Software Licenses: QL-3MRA*-AA

  - Software Migration Licenses: QL-6J7A*-AA

  - Software Product Services: QT-3MRA*-**

  - LMF PAK Name: VMSCLUSTER-CLIENT

* Denotes variant fields. For additional information on available licenses, services, and media, refer to the appropriate price book.

The right to the functionality of the DECamds and Availability Manager availability management software is included in all the licenses in the preceding list.

DOCUMENTATION

The following manuals are included in the OpenVMS hardcopy documentation as part of the full documentation set:

o OpenVMS Cluster Systems

o Guidelines for OpenVMS Cluster Configurations

o DECamds User's Guide

o Availability Manager User's Guide

Refer to the Compaq OpenVMS Operating System for VAX and Alpha Software Product Description for additional information about OpenVMS documentation and how to order it.

Specific terms and conditions regarding documentation on media apply to this product. Refer to Compaq's terms and conditions of sale, as follows:

"A software license provides the right to read and print software documentation files provided with the software distribution kit for use by the licensee as reasonably required for licensed use of the software. Any hard copies or copies of files generated by the licensee must include Compaq's copyright notice. Customization or modifications, of any kind, to the software documentation files are not permitted. Copies of the software documentation files, either hardcopy or machine readable, may only be transferred to another party in conjunction with an approved relicense by Compaq of the software to which they relate."

SOFTWARE LICENSING

This software is furnished under the licensing provisions of Compaq Computer Corporation's Standard Terms and Conditions. For more information about Compaq's licensing terms and policies, contact your local Compaq office.

License Management Facility Support

The OpenVMS Cluster Software product supports the OpenVMS License Management Facility (LMF). License units for this product are allocated on an Unlimited System Use basis.

For more information about the License Management Facility, refer to the OpenVMS Operating System for VAX and Alpha Software Product Description (SPD 25.01.xx) or the OpenVMS documentation set.

SOFTWARE PRODUCT SERVICES

A variety of service options are available from Compaq. For more information, contact your local Compaq office.

SOFTWARE WARRANTY

This software is provided by Compaq with a 90-day conformance warranty in accordance with the Compaq warranty terms applicable to the license purchase.

The above information is valid at time of release. Contact your local Compaq office for the most up-to-date information.

© 2001 Compaq Computer Corporation

AlphaServer, AlphaStation, Compaq, Digital, HSC, HSJ, HSZ, MicroVAX, StorageWorks, VAX, VMS, and the Compaq logo are registered in the U.S. Patent and Trademark Office.

DECnet, OpenVMS, and UNIBUS are trademarks of Compaq Information Technologies Group, L.P. in the United States and other countries.
Motif is a trademark of The Open Group in the United States and other countries.

Confidential computer software. Valid license from Compaq required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Compaq shall not be liable for technical or editorial errors or omissions contained herein. The information in this document is provided "as is" without warranty of any kind and is subject to change without notice. The warranties for Compaq products are set forth in the express limited warranty statements accompanying such products. Nothing herein should be construed as constituting an additional warranty.