Software Product Description
___________________________________________________________________

PRODUCT NAME: HP OpenVMS Cluster Software                 SPD 29.78.25

January 2005

This Software Product Description describes Versions 6.2-1H3, 7.1-1H1, 7.1-1H2, 7.1-2, 7.2, 7.2-1, 7.2-1H1, 7.2-2, 7.3, 7.3-1, 7.3-2, and 8.2 of the following products:

o HP VMScluster Software for OpenVMS Integrity Servers

o HP VMScluster Software for OpenVMS Alpha

o HP VAXcluster Software for OpenVMS VAX (through V7.3)

o HP OpenVMS Cluster Client Software for Integrity Servers

o HP OpenVMS Cluster Client Software for Alpha (part of NAS150)

o HP OpenVMS Cluster Client Software for VAX (through V7.3) (part of NAS150)

Except where noted, the features described in this SPD apply equally to Integrity server, Alpha, and VAX systems. Starting with OpenVMS Version 8.2, OpenVMS Cluster support for OpenVMS Industry Standard 64 (I64) for Integrity servers is available. OpenVMS Cluster Software licenses and part numbers are architecture specific; refer to the Ordering Information section of this SPD for further details.

DESCRIPTION

OpenVMS Cluster Software is an OpenVMS System Integrated Product (SIP). It provides a highly integrated OpenVMS computing environment distributed over multiple I64, Alpha, or VAX systems, or a mix of Alpha and VAX or Alpha and I64 systems. In this SPD, this environment is referred to as an OpenVMS Cluster.

Systems in an OpenVMS Cluster system can share processing, mass storage (including system disks), and other resources under a single OpenVMS security and management domain. Within this highly integrated environment, systems retain their independence because they use local, memory-resident copies of the OpenVMS operating system. Thus, OpenVMS Cluster systems can boot and shut down independently while benefiting from common resources.

Applications running on one or more systems in an OpenVMS Cluster system can access shared resources in a coordinated manner. OpenVMS Cluster software components synchronize access to shared resources, allowing multiple processes on any system in the OpenVMS Cluster to perform coordinated, shared data updates.

Because resources are shared, OpenVMS Cluster systems offer higher availability than standalone systems. Properly configured OpenVMS Cluster systems can withstand the shutdown or failure of various components. For example, if one system in an OpenVMS Cluster is shut down, users can log in to another system to create a new process and continue working. Because mass storage can be shared clusterwide, the new process is able to access the original data. Applications can be designed to survive these events automatically.

All OpenVMS Cluster systems have the following software features in common:

o The OpenVMS operating system and OpenVMS Cluster software allow all systems to share read and write access to disk files in a fully coordinated environment. Application programs can specify the level of clusterwide file sharing that is required; access is then coordinated by the OpenVMS extended QIO processor (XQP) and Record Management Services (RMS). Coherency of multiple-system configurations is implemented by OpenVMS Cluster software using a flexible and sophisticated per-system voting mechanism.

o Shared batch and print queues are accessible from any system in the OpenVMS Cluster system. The OpenVMS queue manager controls clusterwide batch and print queues, which can be accessed by any system.
Batch jobs submitted to clusterwide queues are routed to any available system so the batch load is shared. 2 o The OpenVMS Lock Manager System Services operate in a clusterwide manner. These services allow reliable, coordinated access to any resource, and provide signaling mechanisms at the system and pro- cess level across the whole OpenVMS Cluster system. o All disks and tapes in an OpenVMS Cluster system can be made ac- cessible to all systems. o Process information and control services, including the ability to create and delete processes, are available on a clusterwide basis to application programs and system utilities. (Clusterwide process creation is available with Version 7.1 and higher.) o Configuration command procedures assist in adding and removing sys- tems and in modifying their configuration characteristics. o The dynamic Show Cluster utility displays the status of OpenVMS Clus- ter hardware components and communication links. o A fully automated clusterwide data and application caching feature enhances system performance and reduces I/O activity. o The ability to define logical names that are visible across mul- tiple nodes in an OpenVMS Cluster (Version 7.2 and higher). o An application programming interface (API) allows applications within multiple OpenVMS Cluster nodes to communicate with each other (Ver- sion 7.2 and higher). o Standard OpenVMS system management and security features work in a clusterwide manner so that the entire OpenVMS Cluster system op- erates as a single security and management domain. o The OpenVMS Cluster software dynamically balances the interconnect I/O load in OpenVMS Cluster configurations that include multiple interconnects. o Multiple OpenVMS Cluster systems can be configured on a single or extended local area network (LAN). LANs and the LAN adapters used for OpenVMS Cluster communications can be used concurrently by other network protocols. 3 o The optionally installable HP Availability Manager (as well as the DECamds availability management tool) allows system managers to mon- itor and manage resource availability in real time on all the mem- bers of an OpenVMS Cluster. o Cross-architecture satellite booting permits VAX boot nodes to pro- vide boot service to Alpha satellites and allows Alpha boot nodes to provide boot service to VAX satellites. For V8.2, satellite boot support on I64 systems is not yet available. o System services enable applications to automatically detect changes in OpenVMS Cluster membership. Definitions The following terms are used frequently throughout this SPD: o Boot node - A system that is both a Maintenance Operations Proto- col (MOP) server and a disk server. A boot node can fully service satellite boot requests. o System - An Integrity server family, Alpha family or VAX family com- puter running the OpenVMS operating system. A system comprises one or more processors and operates as an OpenVMS Cluster node. An Open- VMS Cluster node can be referred to as an OpenVMS Cluster member. o Disk server - A system that uses the OpenVMS MSCP server to make disks to which it has direct access available to other systems in the OpenVMS Cluster system. o HSC, HSJ - Intelligent mass storage controller subsystems that con- nect to the CI bus. o HSD - An intelligent mass storage controller subsystem that con- nects to the DSSI bus. o HSG, HSV/EVA, MSA, XP - Intelligent mass storage controller sub- systems that connect to the Fibre Channel bus. 
o HSZ - An intelligent mass storage controller subsystem that con- nects to the SCSI bus. 4 o MDR (Modular Data Router) - Fibre Channel to SCSI bridge allowing SCSI tape devices to be used behind a Fibre Channel switch. o NSR (Network Storage Router) - Fibre Channel to SCSI bridge allow- ing SCSI tape devices to be used behind a Fibre Channel switch. o Maintenance Operations Protocol (MOP) server - A system that ser- vices satellite boot requests to provide the initial LAN downline load sequence of the OpenVMS operating system and OpenVMS Cluster software. At the end of the initial downline load sequence, the satel- lite uses a disk server to perform the remainder of the OpenVMS boot- ing process. o Mixed-architecture OpenVMS Cluster system - An OpenVMS Cluster sys- tem that is configured with Alpha and VAX systems or Alpha and In- tegrity server systems. o MSCP (mass storage control protocol) - A message-based protocol for controlling Digital Storage Architecture (DSA) disk storage sub- systems. The protocol is implemented by the OpenVMS DUDRIVER de- vice driver. o Multihost configuration - A configuration in which more than one system is connected to a single CI, DSSI, SCSI, or Fibre Channel interconnect. o Satellite - A system that is booted over a LAN using a MOP server and disk server. o Single-host configuration - A configuration in which a single sys- tem is connected to a CI, DSSI, SCSI, or Fibre Channel intercon- nect. o Star coupler - A common connection point for all CI connected sys- tems and HSC and HSJ controllers. o Tape server - A system that uses the OpenVMS TMSCP server to make tapes to which it has direct access available to other systems in the OpenVMS Cluster system. o TMSCP (tape mass storage control protocol) - A message-based pro- tocol for controlling DSA tape-storage subsystems. The protocol is implemented by the OpenVMS TUDRIVER device driver. 5 o Vote - Systems in an OpenVMS Cluster system can be configured to provide votes that are accumulated across the multi-system envi- ronment. Each system is provided with knowledge of how many votes are necessary to meet a quorum before distributed shared access to resources is enabled. An OpenVMS Cluster system must be configured with at least one voting system. OpenVMS Cluster Client Software OpenVMS Cluster configurations can be configured with systems that op- erate and are licensed explicitly as client systems. OpenVMS Cluster Client licensing is provided as part of the NAS150 layered product. An individually available license for DS-series AlphaServers is also provided. OpenVMS Cluster Client systems contain full OpenVMS Clus- ter functionality as described in this SPD, with the following excep- tions: o Client systems cannot provide votes toward the operation of the Open- VMS Cluster system. o Client systems cannot MSCP serve disks or TMSCP serve tapes. Interconnects OpenVMS Cluster systems are configured by connecting multiple systems with a communications medium, referred to as an interconnect. Open- VMS Cluster systems communicate with each other using the most appro- priate interconnect available. In the event of interconnect failure, OpenVMS Cluster software automatically uses an alternate interconnect whenever possible. 
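The voting and quorum scheme described in the Definitions section above, and the fact that client systems contribute no votes, can be observed from DCL on any member node. The following command procedure is a minimal illustrative sketch (not part of the product); it relies on the cluster item codes of the F$GETSYI lexical function and the SHOW CLUSTER utility, and should be checked against the documentation for the OpenVMS version in use.

   $! SHOW_QUORUM.COM -- display this node's contribution to cluster quorum
   $ node     = F$GETSYI("NODENAME")
   $ member   = F$GETSYI("CLUSTER_MEMBER")   ! TRUE if this node has joined the cluster
   $ nodes    = F$GETSYI("CLUSTER_NODES")    ! number of members currently in the cluster
   $ my_votes = F$GETSYI("NODE_VOTES")       ! votes contributed by this node (0 on a client)
   $ c_votes  = F$GETSYI("CLUSTER_VOTES")    ! total votes currently present
   $ quorum   = F$GETSYI("CLUSTER_QUORUM")   ! votes required before shared access is enabled
   $ WRITE SYS$OUTPUT "Node ''node' cluster member: ''member'   members: ''nodes'"
   $ WRITE SYS$OUTPUT "Node votes: ''my_votes'   Cluster votes: ''c_votes'   Quorum: ''quorum'"
   $ SHOW CLUSTER                            ! one-shot summary of members, votes, and quorum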
OpenVMS Cluster software supports any combination of the following interconnects:

o CI (computer interconnect) (Alpha and VAX)

o DSSI (Digital Storage Systems Interconnect) (Alpha and VAX)

o SCSI (Small Computer Storage Interconnect) (storage only, Alpha only)

o FDDI (Fiber Distributed Data Interface) (Alpha and VAX)

o Ethernet (10/100, Gigabit) (I64, Alpha, and VAX)

o Asynchronous transfer mode (ATM) (emulated LAN configurations only, Alpha only)

o Memory Channel (Version 7.1 and higher only, Alpha only)

o Fibre Channel (storage only, Version 7.2-1 and higher only, I64 and Alpha only)

CI and DSSI are highly optimized, special-purpose interconnects for systems and storage subsystems in OpenVMS Cluster configurations. CI and DSSI provide both system-to-storage communication and system-to-system communication.

SCSI is an industry-standard storage interconnect. Multiple systems can be configured on a single SCSI bus, thereby providing multihost access to SCSI storage devices. Note that the SCSI bus is not used for system-to-system communication. Consequently, systems connected to a multihost SCSI bus must also be configured with another interconnect to provide system-to-system communication.

Fibre Channel is an industry-standard interconnect for storage and communications. Support by OpenVMS Version 7.2-1 (and higher) allows for a storage-only interconnect in a multihost environment utilizing Fibre Channel switched topologies. Starting with Version 7.2-2, SCSI tapes utilizing the Modular Data Router bridge or the Network Storage Router bridge are supported. As is true with SCSI, systems connected to a multihost Fibre Channel bus must also be configured with another interconnect to provide system-to-system communication.

Ethernet, ATM, and FDDI are industry-standard, general-purpose communications interconnects that can be used to implement a local area network (LAN). Except where noted, OpenVMS Cluster support for these LAN types is identical. The ATM device must be configured as an emulated LAN device. Ethernet and FDDI provide system-to-system communication. Storage can be configured in FDDI environments that support FDDI-based storage servers.

OpenVMS Cluster systems can also be configured using wide area network (WAN) infrastructures, such as DS3, E3, and ATM. Connection to these media is achieved by the use of WAN interswitch links (ISLs).

Memory Channel is a high-performance interconnect that provides system-to-system communication. Memory Channel does not provide direct access to storage, so a separate storage interconnect is required in Memory Channel configurations.

Configuration Rules

o Mixed-architecture clusters are limited to two types of architecture-based configurations. The first is the long-standing mixed VAX and Alpha cluster. The second is a mixed Alpha and Integrity server cluster. These are fully supported configurations.

o Support of a VAX server with an Integrity server system in a mixed-architecture cluster (with or without AlphaServers included) is not a formally supported configuration for production environments. This type of configuration, however, can be used temporarily as an interim step for the purposes of development and migration, as applications are moved from VAX to either an Alpha or Integrity server platform.
Should a problem arise while using this type of configuration, the customer will be advised either to revert to their VAX and Alpha cluster environment or to remove the VAX from the cluster that contains Integrity server systems.

o The maximum number of systems supported in an OpenVMS Cluster system is 96. For V8.2 configurations containing Integrity server systems, the count is limited to 16 nodes, with an equal mix of Alpha and I64 systems.

o Every system in an OpenVMS Cluster system must be connected to every other system via any supported OpenVMS Cluster interconnect (see Table 1).

o VAX-11/7xx, VAX 6000, VAX 7000, VAX 8xxx, VAX 9000, and VAX 10000 series systems require a system disk that is accessed via a local adapter or through a local CI or DSSI connection. These systems cannot be configured to boot as satellite nodes.

o All systems connected to a common CI, DSSI, or Memory Channel interconnect must be configured as OpenVMS Cluster members. OpenVMS Cluster members configured on a CI, DSSI, or Memory Channel will become members of the same OpenVMS Cluster (this is imposed automatically by the OpenVMS Cluster software). All systems connected to a multihost SCSI bus must be configured as members of the same OpenVMS Cluster.

o An OpenVMS Cluster system can include any number of star couplers. Table 2 shows the number of CI adapters supported by different systems. The number of star couplers that a system can be connected to is limited by the number of adapters with which it is configured.

o The maximum number of systems that can be connected to a star coupler is 16, regardless of star coupler size.

o The KFQSA Q-bus to DSSI adapter does not support system-to-system communication across the DSSI; systems using this adapter must include another interconnect for system-to-system communication.

o The maximum number of systems that can be connected to a DSSI is four, regardless of system or adapter type. Any mix of systems and adapters is permitted, except where noted in the Hardware Support section of this SPD. Depending on the system model, it may not be possible to configure four systems on a common DSSI bus because of DSSI bus cable-length restrictions. Refer to the specific system configuration manuals for further information.

o The maximum number of systems that can be connected to a SCSI bus is 3. If the SCSI bus includes a five-port or greater Fair Arbitration SCSI Hub (DWZZH-05), the maximum number of systems is increased to 4.

o The maximum number of multihost SCSI buses that a system can be connected to is 26.

o The configuration size for Fibre Channel storage increases on a regular basis with new updates to OpenVMS. Refer to the (VMS_CLUSTER_CONFIG) manual for the most up-to-date configuration capabilities.

o Beginning with OpenVMS Version 7.2-1, multipath failover for both parallel SCSI and Fibre Channel storage environments is supported. This feature allows failover from a locally connected storage path to a served path for data access. For detailed information, refer to the (VMS_CLUSTER_CONFIG) manual.

o Beginning with OpenVMS Version 7.3-1, multipath failover to the MSCP served path is supported. This feature allows failover from physically connected storage paths to the cluster served path for data access. For detailed information, refer to the (VMS_CLUSTER_CONFIG) manual.
o OpenVMS Cluster systems that are configured using WAN interconnects must adhere to the detailed line specifications described in the (VMS_CLUSTER_CONFIG) manual. The maximum system sepa- ration is 150 miles. With proper consulting support via HP Services Disaster Tolerant Consulting Services, the maximum system separa- tion is 500 miles. o A single time-zone setting must be used by all systems in an Open- VMS Cluster system. o An OpenVMS Cluster system can be configured with a maximum of one quorum disk. A quorum disk cannot be a member of an OpenVMS vol- ume set or of a shadow set created by the HP Volume Shadowing for OpenVMS product. o A system disk can contain only a single version of the OpenVMS op- erating system and is architecture specific. For example, OpenVMS Alpha Version 7.3-2 cannot coexist on a system disk with OpenVMS VAX Version 7.3. o HSJ and HSC series disks and tapes can be dual pathed between con- trollers on the same or different star couplers. The HSD30 series disks and tapes can be dual pathed between controllers on the same or different DSSI interconnects. Such dual pathing provides enhanced data availability using an OpenVMS automatic recovery capability called failover. Failover is the ability to use an alternate hard- ware path from a system to a storage device when a failure occurs on the current path. The failover process is transparent to appli- cations. Dual pathing between an HSJ or HSC and a local adapter is 10 not permitted. When two local adapters are used for dual pathing, each adapter must be located on a separate system of the same ar- chitecture. (Note: When disks and tapes are dual pathed between con- trollers that are connected to different star couplers or DSSI buses, any system connected to one of the star couplers or buses must also be connected to the other.) o Disks can be dual pathed between pairs of HSZ controllers that are arranged in a dual-redundant configuration. The controllers must be connected to the same host SCSI bus. Failover is accomplished using the HSZ transparent failover capability. o OpenVMS operating system and layered-product installations and up- grades cannot be performed across architectures. OpenVMS Alpha soft- ware installations and upgrades must be performed using an Alpha system with direct access to its system disk. OpenVMS VAX software installations and upgrades must be performed using a VAX system with direct access to its system disk. OpenVMS I64 software installa- tions and upgrades must be performed using an Integrity server sys- tem with direct access to its system disk. o Ethernet LANs and the protocols that use them must conform to the IEEE 802.2 and IEEE 802.3 standards. Ethernet LANs must also sup- port Ethernet Version 2.0 packet formats. o FDDI LANs and the protocols that use them must conform to the IEEE 802.2, ANSI X3.139-1987, ANSI X3.148-1988, and ANSI X3.166-1990 stan- dards. o LAN segments can be bridged to form an extended LAN (ELAN). The ELAN must conform to IEEE 802.1D, with the following restrictions: - All LAN paths used for OpenVMS Cluster communication must op- erate with a nominal bandwidth of at least 10 megabits per sec- ond. - The ELAN must be capable of delivering packets that use the padded Ethernet Version 2.0 packet format and the FDDI SNAP/SAP packet format. 11 - The ELAN must be able to deliver packets with a maximum data field length of at least 1080 bytes.[1] - The maximum number of bridges between any two end nodes is 7. 
- The maximum transit delay through any bridge must not exceed 2 seconds. - The ELAN must provide error-detection capability between end nodes that is equivalent to that provided by the Ethernet and FDDI data link frame-check sequences. o The average packet-retransmit timeout ratio for OpenVMS Cluster traf- fic on the LAN from any system to another must be less than 1 time- out in 1000 transmissions. Recommendations The optimal OpenVMS Cluster system configuration for any computing en- vironment is based on requirements of cost, functionality, performance, capacity, and availability. Factors that impact these requirements in- clude: o Applications in use o Number of users o Number and models of systems o Interconnect and adapter throughput and latency characteristics o Disk and tape I/O capacity and access time o Number of disks and tapes being served o Interconnect utilization ____________________ In the padded Ethernet format, the data field follows the 2-byte length field. These two fields together comprise the LLC data field in the 802.3 format. 12 HP recommends OpenVMS Cluster system configurations based on its ex- perience with the OpenVMS Cluster Software product. The customer should evaluate specific application dependencies and performance require- ments to determine an appropriate configuration for the desired com- puting environment. When planning an OpenVMS Cluster system, consider the following rec- ommendations: o OpenVMS Cluster systems should be configured using interconnects that provide appropriate performance for the required system us- age. In general, use the highest-performance interconnect possi- ble. Gigabit Ethernet and Memory Channel are the preferred inter- connects between powerful systems. o Although OpenVMS Cluster systems can include any number of system disks, consider system performance and management overhead in de- termining their number and location. While the performance of con- figurations with multiple system disks may be higher than with a single system disk, system management efforts increase in propor- tion to the number of system disks. o Data availability and I/O performance are enhanced when multiple OpenVMS Cluster systems have direct access to shared storage; when- ever possible, configure systems to allow direct access to shared storage in favor of OpenVMS MSCP served access. Multiaccess CI, DSSI, SCSI, and Fibre Channel storage provides higher data availability than singly accessed, local adapter-based storage. Additionally, dual pathing of disks between local or HSC/HSJ/HSD/HSZ/HSG/MSA/XP/EVA storage controllers enhances data availability in the event of con- troller failure. o OpenVMS Cluster systems can enhance availability by utilizing re- dundant components, such as additional systems, storage controllers, disks, and tapes. Extra peripheral options, such as printers and terminals, can also be included. Multiple instances of all Open- VMS Cluster interconnects (CI, Memory Channel, DSSI, Ethernet, ATM, Gigabit Ethernet, FDDI, and of all OpenVMS Cluster Storage inter- connects (SCSI and Fibre Channel) are supported. 13 o To enhance resource availability, OpenVMS Clusters that implement satellite booting should use multiple boot servers. When a server fails in configurations that include multiple servers, satellite access to multipath disks will fail over to another path. Disk servers should be the most powerful systems in the OpenVMS Cluster and should use the highest bandwidth LAN adapters available. 
o The performance of an FDDI LAN varies with each configuration. When an FDDI is used for OpenVMS Cluster communications, the ring latency when the FDDI ring is idle should not exceed 400 microseconds. This ring latency translates to a cable distance between end nodes of approximately 40 kilometers.

o The ELAN must provide adequate bandwidth, reliability, and low delay to optimize the operation of the OpenVMS Cluster. In-depth configuration guidelines for these ELAN environments are provided in the OpenVMS documentation set and are updated frequently as the technology evolves. For specific configuration information, refer to the following manuals:

  - (vms_vaxcluster)

  - (VMS_CLUSTER_CONFIG)

o The RAID level 1 storage functionality of Volume Shadowing for OpenVMS provides the following advantages:

  - Enhanced data availability in the event of disk failure

  - Enhanced read performance with multiple shadow-set members

  For more information, refer to the HP Volume Shadowing for OpenVMS Software Product Description (SPD 27.29.xx).

o The HP DECram for OpenVMS software product can be used to create high-performance, memory-resident RAM disks. For additional information, refer to the DECram for OpenVMS Software Product Description (SPD 34.26.xx).

OpenVMS Cluster Management Tools

OpenVMS software incorporates real-time monitoring, investigation, diagnostic, and system management tools that can be used to improve overall cluster system availability.

HP Availability Manager

HP Availability Manager is a system management tool that enables one or more OpenVMS Alpha or VAX nodes to be monitored on an extended local area network (LAN) from an OpenVMS Alpha, OpenVMS I64, or a Windows 2000 or XP node. This tool helps system managers and analysts target a specific node or process for detailed analysis. The analysis detects resource availability problems and suggests corrective actions. The data analyzer does not run on OpenVMS VAX, which does not support Java.

HP DECamds

HP DECamds is functionally similar to Availability Manager, but it runs on OpenVMS Alpha and VAX systems.

SCACP

The Systems Communications Architecture Control Program (SCACP) is designed to monitor and manage LAN cluster communications.

HARDWARE SUPPORT

System Support

Any Integrity server, Alpha, or VAX system, as documented in the HP OpenVMS Operating System for VAX and Alpha Software Product Description (SPD 25.01.xx), can be used in an OpenVMS Cluster.

Peripheral Option and Storage Controller Support

OpenVMS Cluster systems can use all peripheral options and storage subsystems supported by OpenVMS. Refer to the OpenVMS Operating System for VAX and Alpha SPD (SPD 25.01.xx) for more information.

Interconnect Support

Table 1 shows which systems are supported on which interconnects and whether the system can be booted as a satellite node over that interconnect. All systems can service satellite boot requests over a LAN interconnect (FDDI or Ethernet).

Note: Levels of interconnect support and LAN booting capabilities are continually being increased. In many cases, these additional capabilities result from hardware option and system console microcode enhancements and are not dependent on OpenVMS software. For the most up-to-date information, refer to the appropriate hardware option and system documentation.
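To see which interconnects and ports a running member is actually using, the SHOW CLUSTER and SCACP utilities described above can be used interactively. The following is a minimal illustrative sketch only; the SHOW CLUSTER report classes shown are standard, but the exact SCACP display commands and prompts should be verified against the documentation for the OpenVMS version in use.

   $! Add circuit and connection classes to a continuous SHOW CLUSTER display
   $ SHOW CLUSTER/CONTINUOUS
   Command> ADD CIRCUITS        ! local port, remote port, and circuit state
   Command> ADD CONNECTIONS     ! SCS connections carried over those circuits
   Command> EXIT
   $!
   $! Examine LAN devices and channels used for cluster (SCS) traffic (V7.3 and higher)
   $ RUN SYS$SYSTEM:SCACP
   SCACP> SHOW LAN_DEVICE       ! LAN adapters enabled for cluster communication
   SCACP> SHOW CHANNEL          ! per-channel state between this node and other members
   SCACP> EXIT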
LAN Support

OpenVMS Cluster systems can use all Ethernet (10 Mb/sec, 100 Mb/sec, and 1000 Mb/sec) and FDDI LAN adapters supported by OpenVMS for access to Ethernet and FDDI interconnects. Any number of LAN adapters can be configured in any combination (with the exception that a Q-bus can be configured with only one FDDI adapter). Refer to the OpenVMS Operating System for VAX and Alpha Software Product Description (SPD 25.01.xx) for more information.

Gigabit Ethernet LAN adapters can be used for limited OpenVMS Cluster interconnect capability for Version 7.1-2 through Version 7.2-xx. OpenVMS Version 7.3 and higher clusters provide more robust support for Gigabit Ethernet and ATM emulated LAN Ethernet connections. Additionally, OpenVMS Version 7.3 and higher allows load distribution of SCS cluster communications traffic across multiple, parallel LAN connections between cluster nodes. For specific limitations on these interconnects, refer to the release notes for your OpenVMS operating system version.

The DEFZA FDDI adapter is supported on VAX systems only.

Note: VAX systems cannot be booted over an FDDI.

Alpha and VAX Cluster Interconnect Support

Table 1:
___________________________________________________________________
                                      Memory              Multihost              ATM,[3]   Fibre
System                          CI    Channel[1]  DSSI    SCSI       FDDI        Ethernet  Channel
___________________________________________________________________
AlphaServer GS 80/160/320,      Yes[4] Yes        Yes[5]  Yes        Yes+Sat[6]  Yes       Yes
  GS60/140, GS1280, 8200, 8400
AlphaServer ES40, ES45, ES47,   Yes   Yes         Yes     Yes        Yes+Sat     Yes       Yes
  ES80, 4000, 4100
AlphaServer 1200, 2000, 2100,   Yes   Yes         Yes     Yes        Yes+Sat     Yes+Sat   Yes[7]
  2100A, DS20, DS20E
AlphaServer DS10/10L/20/25,     -     Yes         Yes     Yes        Yes+Sat     Yes+Sat   Yes[8]
  1000, 1000A
AlphaServer 400, 800            -     -           Yes     Yes        Yes+Sat[1]  Yes+Sat   Yes[9]
AlphaServer 300                 -     -           -       Yes        Yes         Yes+Sat   -
AlphaStations                   -     -           -       Yes        Yes+Sat[10] Yes+Sat   Yes[2]
DEC 7000, 10000                 Yes   -           Yes     -          Yes+Sat     Yes       -
DEC 4000                        -     -           Yes     -          Yes         Yes+Sat   -
DEC 3000                        -     -           -       Yes        Yes+Sat[11] Yes+Sat   -
DEC 2000                        -     -           -       -          Yes         Yes+Sat   -
VAX 6000, 7000, 10000           Yes   -           Yes     -          Yes         Yes       -
VAX 8xxx, 9xxx, 11/xxx          Yes   -           -       -          -           Yes       -
VAX 4xxx[12]                    -     -           Yes     -          Yes         Yes+Sat   -
VAX 2xxx, 3xxx[12]              -     -           -       -          -           Yes+Sat   -
___________________________________________________________________

[1] Version 7.1 and higher only. Support for Memory Channel on the GS1280 and ES47 will be announced during H1CY2003.

[2] Newer AlphaStations based on DS Series servers support Fibre Channel storage.

[3] ATM using an emulated LAN configuration can be used as a cluster interconnect on all AlphaServer systems, except for AlphaServer 300 and 400 systems. ATM is not supported on the DEC Series systems listed or on VAX systems.

[4] Each "Yes" means that this system is supported on this interconnect but cannot be booted as a satellite over this interconnect.

[5] DSSI is not supported on GS, ES, or DS Series AlphaServers.

[6] Each "Yes+Sat" means that this system is supported on this interconnect and can be booted as a satellite node over this interconnect.

[7] Excludes AlphaServer 2000, 2100, 2100A.

[8] Excludes AlphaServer 1000.

[9] AlphaServer 800 only.

[10] Version 7.1 and higher only. Most models provide FDDI booting capability. Refer to system-specific documentation for details.

[11] Using DEFTA only.

[12] Some models may provide slightly different interconnect support. Refer to system-specific documentation for details.
___________________________________________________________________

Alpha and Integrity Server Cluster Interconnect Support

With the V8.2 release of OpenVMS Cluster Software, mixed Alpha and Integrity server interconnect support is limited to a single method each for storage and SCS traffic. Alpha and I64 systems can be connected via 10/100/1000 LAN interconnects for SCS traffic and via Fibre Channel SAN fabric for shared storage. Satellite boot support is planned for a future release with I64 support. All other interconnect types are not supported on the I64 platform for OpenVMS.

CI Support

OpenVMS Cluster systems can be configured with multiple CI adapters. Table 2 shows the types of adapters that are supported by each system. There can be only one type of adapter configured in a system (with the exception that, with OpenVMS Version 7.1, CIXCD and CIPCA adapters can be configured together in the same system). The maximum number of each type is noted in the table. The CI adapters in a system can connect to the same or different star couplers.

Note: The CIBCA-A adapter cannot coexist with a KFMSA adapter on the same system.

Note: The CIBCA-A and CIBCA-B are different.

Table 2:
___________________________________________________________________
System                          CI750  CI780  CIBCI  CIBCA-A  CIBCA-B  CIXCD  CIPCA
___________________________________________________________________
AlphaServer GS, 8400            -      -      -      -        -        10     10, 26[1]
AlphaServer 8200                -      -      -      -        -        -      10, 26[1]
AlphaServer ES, 4000, 4100      -      -      -      -        -        -      3[2]
AlphaServer 4000 + I/O          -      -      -      -        -        -      6[3]
  expansion
AlphaServer DS, 2100A, 1200     -      -      -      -        -        -      3
AlphaServer 2000, 2100          -      -      -      -        -        -      2[4]
DEC 7000, 10000                 -      -      -      -        -        10     -
VAX 11/750                      1      -      -      -        -        -      -
VAX 11/780, 11/785              -      1      -      -        -        -      -
VAX 6000                        -      -      -      1        4        4      -
VAX 82xx, 83xx                  -      -      1      1        1        -      -
VAX 86xx                        -      2      -      -        -        -      -
VAX 85xx, 8700, 88xx            -      -      1      1        2        -      -
VAX 9000                        -      -      -      -        -        6      -
VAX 7000, 10000                 -      -      -      -        -        10     -
___________________________________________________________________

[1] The two numbers represent the support limits for Version 6.2-1H3 and Version 7.1 and higher, respectively.

[2] For three CIPCAs, one must be CIPCA-AA and two must be CIPCA-BA.

[3] Only three can be CIPCA-AA.

[4] Only one can be a CIPCA-BA.
___________________________________________________________________

Observe the following guidelines when configuring CIPCA adapters:

o The CIPCA adapter can coexist on a CI bus with CIXCD and CIBCA-B CI adapters and all variants of the HSC/HSJ controller except the HSC50. Other CI adapters cannot be configured on the same CI bus as a CIPCA.
HSC40/70 controllers must be configured with a Revision F (or higher) L109 module. o The CIPCA-AA adapter occupies a single PCI backplane slot and a single EISA backplane slot. o The CIPCA-BA adapter occupies two PCI backplane slots. Star Coupler Expander A CI star coupler expander (CISCE) can be added to any star cou- pler to increase its connection capacity to 32 ports. The maximum number of systems that can be connected to a star coupler is 16, regardless of the number of ports. Memory Channel Support (Version 7.1 and higher only) Memory Channel is supported on all HP AlphaServer systems start- ing with the AlphaServer 1000. Observe the following rules when configuring Memory Channel: o A maximum of eight systems can be connected to a single Memory Channel interconnect. o Systems configured with Memory Channel adapters require a mini- mum of 128 megabytes of memory. o A maximum of two Memory Channel adapters can be configured in a system. Configuring two Memory Channel interconnects can im- prove the availability and performance of the cluster config- uration. Only one Memory Channel adapter may be configured in an AlphaServer 8xxx DWLPA I/O channel configured with any other adapter or bus option. This restriction does not apply to the DWLPB I/O channel, or to DWLPA I/O channels that have no other adapters or bus options. o Multiple adapters in a system cannot be connected to the same Memory Channel hub. o Memory Channel adapters must all be the same version. Specif- ically, a Memory Channel V1.5 adapter cannot be mixed with a Memory Channel V2.0 adapter within the same connection. DSSI Support Any mix of Alpha and VAX DSSI adapters can be configured on a com- mon DSSI bus (except where noted in the following list). Refer to the appropriate hardware manuals for specific adapter and configu- ration information. The following points provide general guidelines for configurations: 24 o Configure the AlphaServer systems shown in Table 1 with KFPSA (PCI to DSSI) adapters. The KFPSA is the highest-performance DSSI adapter and is recommended wherever possible. o Other supported adapters include: - KFESB (EISA to DSSI) for all AlphaServer systems except 4xxx and 8xxx models - KFESA (EISA to DSSI) for AlphaServer 2100 systems - KFMSB for Alpha XMI systems - KFMSA for VAX XMI systems - KFQSA for VAX Q-bus systems o KFMSB adapters and KFPSA adapters cannot be configured on the same DSSI bus. o Up to 24 KFPSAs can be configured on a system. o Up to 6 KFMSA/Bs can be configured on an XMI bus. o Up to 12 KFMSA/Bs can be configured in a system. o Up to four KFESBs can be configured on a system. o Up to two KFESAs can be configured on a system. o A mix of one KFESB and one KFESA can be configured on a system. o Because the DEC 4000 DSSI adapter terminates the DSSI bus, only two DEC 4000s can be configured on a DSSI. o Some of the new generation AlphaServer processors will support DSSI. The GS series and the DS20 series will have support. Other DS series and the ES series will not. Multihost SCSI Storage Support OpenVMS Cluster Software provides support for multihost SCSI con- figurations using Alpha systems and SCSI adapters, devices, and controllers. Table 1 shows which systems can be configured on a multihost SCSI bus. Any HP AlphaStation or AlphaServer system that supports optional KZPSA (fast-wide differential) or KZPBA-CB (ultrawide differen- tial; Version 7.1-1H1 and higher only) adapters can use them to connect to a multihost SCSI bus. 
Refer to the appropriate system documentation for system-specific KZPSA and KZPBA support information. Single-host Ultra SCSI connections with either the KZPBA-CA (ultrawide single-ended adapter) or the KZPBA-CB (ultrawide differential adapter) are supported in Version 6.2-1H3 and higher.

Also, any AlphaStation or AlphaServer system except the AlphaServer 4000, 4100, 8200, and 8400 can use embedded NCR-810-based SCSI adapters, or on pre-EV6 hardware platforms the optional KZPAA adapters, to connect to a multihost SCSI bus.

Additionally, DEC 3000 systems can use optional KZTSA (fast-wide differential) adapters to connect to a multihost SCSI bus.

Note: A wide range of SCSI adapters can be used to connect to a single-host SCSI bus. For further information about the complete range of SCSI support, refer to the OpenVMS Operating System for VAX and Alpha Software Product Description (SPD 25.01.xx).

HP recommends optional adapters for connection to multihost buses. Use of optional adapters simplifies SCSI cabling and also leaves the embedded system adapter available for tape drives, floppies, and CD-ROMs.

Multihost SCSI configurations can include DWZZA/DWZZB single-ended SCSI to differential SCSI converters.

Multihost SCSI buses can be configured with any appropriately compliant SCSI-2 or SCSI-3 disk. Disks must support the following three features:

o Multihost support

o Tagged command queueing

o Automatic bad block revectoring

These SCSI disk requirements are fully documented in the (VMS_CLUSTER_CONFIG) manual. In general, nearly all disk drives available today, from HP or third-party suppliers, support these features. Known exceptions among HP drives are the RZ25 and RZ26F, which do not support tagged command queueing.

Tape drives, floppy disks, and CD-ROMs cannot be configured on multihost SCSI buses. Configure these devices on single-host SCSI buses.

HSZ series storage controllers can be configured on a multihost SCSI bus. Refer to the appropriate HSZ storage controller documentation for configuration information. Note that it is not possible to configure tape drives, floppy disks, or CD-ROMs on HSZ controller storage buses when the HSZ is connected to a multihost SCSI bus.

Multihost SCSI buses must adhere to all SCSI-2 or SCSI-3 specifications. Rules regarding cable length and termination must be adhered to carefully. For further information, refer to the SCSI-2 or SCSI-3 specification or the (VMS_CLUSTER_CONFIG) manual.

Fibre Channel Storage Support

Beginning with Version 7.2-1, OpenVMS Cluster Software provides support for multihost Fibre Channel storage configurations using Alpha systems and Fibre Channel adapters, switches, and controllers. This support is also available for Integrity servers beginning with OpenVMS Version 8.2. Direct-attached Fibre Channel storage and Arbitrated Loop Fibre Channel configurations are not supported. For the current configuration guidelines and limitations, refer to the (VMS_CLUSTER_CONFIG) manual.

This manual outlines the specific requirements for the controllers (HSG80, HSG60, HSV110, MSA1000, and XP), switches, and adapters (KGPSA-**), and for the disks that can be attached to this configuration. The supported number of hosts, adapters, and switches, and the distances between them, are constantly being increased, so refer to the manual for up-to-date information on this evolving area.
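As an illustrative sketch only (the device name and path identifier shown are hypothetical examples), the following DCL commands are typical of how a system manager verifies shared Fibre Channel devices and their paths once the fabric has been configured; confirm the exact SYSMAN IO and multipath SET DEVICE syntax in the (VMS_CLUSTER_CONFIG) manual for the version in use.

   $! Verify existing SCSI/Fibre Channel paths and configure newly presented devices
   $ RUN SYS$SYSTEM:SYSMAN
   SYSMAN> IO SCSI_PATH_VERIFY    ! confirm that previously discovered paths are still valid
   SYSMAN> IO AUTOCONFIGURE       ! configure new Fibre Channel devices on this node
   SYSMAN> EXIT
   $!
   $! Fibre Channel disks appear as $1$DGAnnnn devices; list them and their paths
   $ SHOW DEVICE DG
   $ SHOW DEVICE/FULL $1$DGA100:  ! /FULL lists every path, direct and MSCP served
   $!
   $! Manually switch the current path of a multipath device (example path name)
   $ SET DEVICE $1$DGA100:/SWITCH/PATH=PGA0.5000-1FE1-0000-0171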
Starting with OpenVMS Version 7.2-2, SCSI tape devices can be con- nected to a Fibre Channel storage environment with the use of a Modular Data Router (MDR) or Network Storage Router (NSR) bridge products. These bridges allow these tape devices to be placed be- hind the Fibre Channel switch environment and to be shared via the same methodologies as the Fibre Channel disks in the same fabric. This support is also available for Integrity servers beginning with OpenVMS Version 8.2. Because the support for Fibre Channel is currently limited to stor- age only, a second interconnect for node-to-node communications must be present for the full clustered capability to be utilized. SOFTWARE REQUIREMENTS OpenVMS Operating System For information about OpenVMS Operating System on VAX and Version V7.3-2 and earlier on Alpha , refer to the HP OpenVMS Operating System for Alpha Version 7.3-1 and 7.3-2, and VAX Version 7.3 Soft- ware Product Description (SPD 25.01.xx). For information about OpenVMS Operating System on Integrity servers and Version 8.2 and later on Alpha, refer to HP OpenVMS for In- tegrity Servers and HP OpenVMS Alpha Version 8.2 Operating Systems Software Product Description (SPD 82.35.xx). The ability to have more than one version of OpenVMS in an OpenVMS Cluster allows upgrades to be performed in a staged fashion so that continuous OpenVMS Cluster system operation is maintained during the upgrade process. Only one version of OpenVMS can exist on any system disk; multiple versions of OpenVMS in an OpenVMS Cluster require multiple system disks. Also, system disks are architecture specific: OpenVMS Alpha and OpenVMS VAX cannot coexist on the same system disk. The coexistence of multiple versions of OpenVMS in an OpenVMS Cluster configuration is supported according to the following conditions: 27 o Warranted support is provided for mixed-architecture OpenVMS Cluster systems in which all Alpha and VAX systems are run- ning the same version of OpenVMS-Version 6.2-xxx, Version 7.0, Version 7.1-xxx, Version 7.2-xxx, or Version 7.3-xxx. o For Version 8.2, warranted support is provided for mixed- architecture clusters running Integrity servers with OpenVMS Version 8.2, and AlphaServers with OpenVMS Version 7.3-2 or Ver- sion 8.2. This warranted support of two versions on Alpha is specific to the 8.2 release of OpenVMS on Alpha and Integrity servers. For releases after OpenVMS Version 8.2, the warranted support will revert back to the single-version across all clus- ter nodes, as stated in the paragraph above. Warranted support is also provided for mixed-architecture clus- ters running OpenVMS VAX Version 7.3 and OpenVMS Alpha Version 8.2. Warranted support means that HP has fully qualified the two architectures coexisting in a OpenVMS Cluster and will answer any problems identified by customers using these configurations. o Migration support is provided for OpenVMS Cluster systems run- ning two versions of the OpenVMS operating system. These ver- sions can be: - Any mix of Version 7.3-2, Version 7.3-1, Version 7.3, Version 7.2-2, Version 7.2-1, Version 7.2-1H1, Version 7.2 - Any mix of Version 7.3-1, Version 7.3, Version 7.2-2, Version 7.2-1xx, Version 7.2, Version 7.1-2, Version 7.1-1Hx, and Version 7.1. - Any mix of Version 7.2, Version 7.1-xxx, and Version 6.2-xxx. - Any mix of Version 7.1, Version 7.0, and Version 6.2-xxx. - Any mix of Version 6.2-xxx with OpenVMS VAX Version 5.5- 2, Version 6.0, Version 6.1 and OpenVMS Alpha Version 1.5, Version 6.0, Version 6.1. 
Migration support means that HP has qualified the two architec- tures and versions for use together in configurations that are migrating in a staged fashion to a higher version of OpenVMS or to Alpha systems. HP will answer problem reports submitted about these configurations. However, in exceptional cases, HP may rec- ommend that you move your system to a warranted configuration as part of the solution. Note: HP does not support the use of more than two versions of OpenVMS software in the same OpenVMS Cluster at the same time. However, in many cases, running more than two versions or mixing versions not described above will operate satisfactorily. HP recommends that all Integrity server, Alpha and VAX systems in an OpenVMS Cluster run the latest version of OpenVMS. DECnet software (Alpha and VAX) 28 DECnet software is not required in an OpenVMS Cluster configura- tion. However, DECnet software is necessary for internode process- to-process communication that uses DECnet mailboxes. The OpenVMS Version 6.2-1H3 Monitor utility uses DECnet for intra- cluster communication. The OpenVMS Version 7.1 (and higher) Monitor utility uses TCP/IP or DECnet based transports, as appropriate, for intracluster communi- cation. Refer to the appropriate HP DECnet Software Product Description for further information. DECamds (Alpha and VAX) DECamds requires HP DECwindows Motif for OpenVMS. For details, refer to the DECwindows Motif for OpenVMS Software Product Descrip- tion (SPD 42.19.xx). OPTIONAL SOFTWARE For information about OpenVMS Cluster support for optional soft- ware products, refer to the OpenVMS Cluster Support section of the Software Product Descriptions for those products. Optional products that may be useful in OpenVMS Cluster systems include: o Volume Shadowing for OpenVMS (SPD 27.29.xx) o RAID Software for OpenVMS (SPD 46.49.xx) o DECram for OpenVMS (SPD 34.26.xx) o VAXcluster Console System (SPD 27.46.xx) GROWTH CONSIDERATIONS The minimum hardware and software requirements for any future ver- sion of this product may be different than the requirements for the current version. DISTRIBUTION MEDIA OpenVMS Cluster Software is distributed on the same distribution media as the OpenVMS Operating System. For more information, refer to the OpenVMS Operating System SPDs. ORDERING INFORMATION OpenVMS Cluster Software is orderable as follows: Every server (nonclient) system in an OpenVMS Cluster configuration requires: o VMScluster Software for OpenVMS I64 29 Software Licenses: Per Processor: BA412AC The VMScluster Software for OpenVMS I64 license is also included with the Mission Critical Operating Environment (MCOE) license bundle. Please refer to the HP Operating Environments for Open- VMS Industry Standard 64 for Integrity servers Software Product Description (SPD 82.34.xx) for ordering information. Software Media: Foundation Operating Environment Media, BA322AA or Enterprise Operating Environment Media, BA323AA or Mission Critical Operating Environment Media, BA324AA LMF PAK Name: VMSCLUSTER o VMScluster Software for OpenVMS Alpha - Software Licenses: QL-MUZA*-AA - Software Product Services: QT-MUZA*-** - LMF PAK Name: VMSCLUSTER Note: VMScluster Software for OpenVMS Alpha provides a unique or- dering and pricing model for single-CPU and dual-CPU capable sys- tems. Specifically, all AlphaServer DS-series systems, along with AlphaServer 800 and 1200 systems, should use the QL-MUZAC-AA li- cense order number; for service, use the corresponding QT-MUZAC-** order number. 
For all remaining AlphaServer systems in the Work- group system class (such as the ES40), use the standard QL-MUZAE-AA license order number; for service, use the corresponding QT-MUZAE- ** order number. VMScluster pricing and ordering for the remaining system classes of AlphaServers are unchanged. o VAXcluster Software for OpenVMS VAX - Software Licenses: QL-VBRA*-AA - Software Product Services: QT-VBRA*-** - LMF PAK Name: VAXCLUSTER OpenVMS Cluster Client Software is available as part of the NAS150 product on Alpha. It is also separately orderable for DS Series AlphaServers. It is available for any Integrity server via a Per- processor license. o VMScluster Client Software for OpenVMS Alpha - Software Licenses: QL-3MRA*-AA - Software Migration Licenses: QL-6J7A*-AA - Software Product Services: QT-3MRA*-** - LMF PAK Name: VMSCLUSTER-CLIENT o VMScluster Client Software for OpenVMS I64 Software Licenses: Per Processor: BA411AC 30 Software Media: Foundation Operating Environment Media, BA322AA or Enterprise Operating Environment Media, BA323AA or Mission Critical Operating Environment Media, BA324AA LMF PAK Name: VMSCLUSTER-CLIENT * Denotes variant fields. For additional information on available licenses, services, and media, refer to the appropriate price book. The right to the functionality of the DECamds and Availability Manager availability management software is included in all the licenses in the preceding list. DOCUMENTATION The following manuals are included in the OpenVMS hardcopy documen- tation as part of the full documentation set: o (VMS_VAXCLUSTER) o (VMS_CLUSTER_CONFIG) o (VMS_DECAMDS_UG) o (AVAIL_MANAGER_UG) Refer to the HP OpenVMS Operating System for Alpha Version 7.3-1 and 7.3-2, and VAX Version 7.3 Software Product Description (SPD 25.01.xx) or the HP OpenVMS for Integrity Servers and HP OpenVMS Alpha Version 8.2 Operating Systems Software Product Description (SPD 82.35.xx) for additional information about OpenVMS documenta- tion and how to order it. Specific terms and conditions regarding documentation on media apply to this product. Refer to HP's terms and conditions of sale, as follows: "A software license provides the right to read and print software documentation files provided with the software distribution kit for use by the licensee as reasonably required for licensed use of the software. Any hard copies or copies of files generated by the licensee must include HP's copyright notice. Customization or modifications of any kind to the software documentation files are not permitted. Copies of the software documentation files, either hardcopy or machine readable, may only be transferred to another party in con- junction with an approved relicense by HP of the software to which they relate." 31 SOFTWARE LICENSING This software is furnished under the licensing provisions of Hewlett-Packard Company's Standard Terms and Conditions. For more information about OpenVMS licensing terms and policies, contact your local HP sales office, or find HP software licensing informa- tion on the World Wide Web at: http://h18000.www1.hp.com/products/software/ info/terms/swl_sld.html License Management Facility Support The OpenVMS Cluster Software product supports the OpenVMS License Management Facility (LMF). License units for this product are allocated on an Unlimited System Use basis. 
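As an illustrative sketch only (the authorization and checksum values are placeholders; the real values come from the License PAK supplied with the order), the cluster license is registered and activated with the standard LICENSE commands:

   $! Register and load the OpenVMS Cluster license from its PAK
   $ LICENSE REGISTER VMSCLUSTER /ISSUER=HP /PRODUCER=HP -
         /AUTHORIZATION=xxxxxxxxxx /UNITS=0 /CHECKSUM=x-xxxx-xxxx-xxxx-xxxx
   $ LICENSE LOAD VMSCLUSTER      ! activate the PAK on this node
   $ SHOW LICENSE VMSCLUSTER      ! confirm that the license is active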
For more information about the License Management Facility, re- fer to the HP OpenVMS Operating System for Alpha Version 7.3-1 and 7.3-2, and VAX Version 7.3 Software Product Description (SPD 25.01.xx) or the HP OpenVMS for Integrity Servers and HP OpenVMS Alpha Version 8.2 Operating Systems Software Product Description (SPD 82.35.xx) or documentation set. SOFTWARE PRODUCT SERVICES A variety of service options are available from HP. For more infor- mation, contact your local HP account representative or distribu- tor. Information is also available on www.hp.com/hps/software. SOFTWARE WARRANTY This software product is provided by HP with a 90-day conformance warranty in accordance with the HP warranty terms applicable to the license purchase. © 2005 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without no- tice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. 32