IBM Netfinity 10/100 Fault Tolerant Adapter
Ethernet Device Driver Installation README File

This README file contains the latest information about installing the ethernet device drivers for the IBM Netfinity 10/100 Fault Tolerant Adapter.

CONTENTS
________

1.0 Known Problems
2.0 Change History
3.0 Installation and Configuration
    3.1 OS/2
    3.2 Windows NT
        3.2.1 WINS Configuration
        3.2.2 IBM PCI Hot Plug Solution
        3.2.3 DMI Support
    3.3 NetWare
    3.4 SCO Open Server
        3.4.1 Installation Procedures
        3.4.2 Driver Options
    3.5 SCO UnixWare
        3.5.1 Installation
        3.5.2 Configuration
            3.5.2.1 Standard driver
            3.5.2.2 Fast EtherChannel driver
                3.5.2.2.1 Configuration File Format
            3.5.2.3 Dynamic Load Balancing with Port Aggregation
                3.5.2.3.1 Configuration File Format
        3.5.3 Limitations and Requirements
    3.6 Wincenter/Winframe
    3.7 Diagnostics
        3.7.1 Restrictions
4.0 Web Sites and Support Phone Number
5.0 Trademarks and Notices
6.0 Disclaimer

1.0 Known Problems
____________________

o In some cases, the ethernet controller of the Netfinity 5500 does not establish a good link with the hub or switch to which it is attached. This sometimes happens when all of the following conditions are true:

  - The ethernet controller is in auto-negotiation mode (the default).
  - The hub or switch attached to the ethernet controller does NOT support auto-negotiation.
  - The length of cable between the ethernet controller and the hub or switch is between 35 and 42 meters.

  In most cases, the ethernet controller can determine the line speed correctly even if the attached hub or switch does not support auto-negotiation. If you are having problems, use the device driver overrides to manually configure the ethernet controller mode to match the hub or switch mode. As with any adapter, if the hub or switch is configured for full duplex and does not support auto-negotiation, you must use the device driver manual overrides for proper operation.

o In NetWare, proper operation of the failover function requires that the adapters of the failover pair use the same interrupt. See the NetWare section below for additional details.

o During driver installation for NetWare using the LDI file, the AUTOEXEC.NCF file will not be updated if driver load statements from a previous installation are present in the file. The AUTOEXEC.NCF file can be edited manually to make the necessary changes.

o Under SCO Open Server there is a bug during driver removal when using netconfig. You must manually remove the pnt directory under /etc/conf/pack.d; otherwise, the next installation of the pnt driver will not work.

2.0 Change History
_____________________

Changes made in this diskette, version 3.00:

- Windows NT driver version 4.14 (WINNT directory)

  Note: The driver file itself has not changed on this diskette image. Some of the supporting files have changed to enable the changes below.

  1. Support for the 802.1p protocol has been added. The DMI application has been enhanced to support QoS (Quality of Service) features.
  2. Fixed an issue in which opening the driver properties box without making any changes caused NT to ask whether you would like to restart the system.

- OS/2 driver version 4.07 (root directory)

  1. Slow network performance when using IRQ sharing has been fixed.
  2. A drastic reduction in data transmission after a few hours of heavy stress has been fixed.
  3. The maximum number of TransmitBuffers that the driver can support has been increased from 16 to 32.

- NetWare driver version 4.22 (NOVELL directory)
  1. Support added for AMD PCnet-FAST-PRO and PCnet-FX Fiber Channel devices.

- SCO Unix OpenServer 5.0x driver version 4.04 (SCOUNIX.50 directory)

  1. Fixed an installation bug in the previous version.
  2. Added support for the AMD PCnet-Pro adapter.

- SCO UnixWare 7.x driver version 2.2.2 (UNIXWARE directory)

  1. Support for UnixWare 7.1 added.

---------------------------------------------------------------

Changes made in diskette version 2.10:

- Windows NT driver version 4.14 (WINNT directory)

  1. Fixed a failover bug that occurred when the system boots with the primary link already disabled.
  2. Fixed an Ownermap registry bug which could have affected some hot plug configurations with Active PCI 4.1 and above.

------------------------------------------------------------------

Changes made in diskette version 2.09:

- OS/2 driver version 4.06 (root directory)

  1. Increased the default transmit buffers to 16 to correct a server overload problem under heavy stress.

- NetWare driver version 4.18 (NOVELL directory)

  1. Fixed a problem of abnormally long failback to the primary adapter if the primary adapter was higher in the PCI scan order than the secondary adapter.

- Windows NT driver version 4.12 (WINNT directory)

  1. Version number was incremented to 4.12.

--------------------------------------------------------------------

Changes made in diskette version 2.08a:

- Windows NT driver version 4.12 (WINNT directory)

  1. The driver is unchanged, but the PCNETFO.EXE file has been changed to fix a bug that caused network traffic to stop after failback to the primary adapter in some configurations.

--------------------------------------------------------------------

Changes made in diskette version 2.08:

- Windows NT driver version 4.12 (WINNT directory)

  1. No longer causes an error message in the NT Event Viewer when the driver is started without DMI software loaded.
  2. There are no longer any slot restrictions when setting up failover pairs.

- DMI Instrumentation changes

  1. The startup mode for DMI instrumentation is set to automatic when the driver is installed, but only if the properties box "Enable for DMI/Hot Swap Support" is checked. If the box is unchecked, the DMI instrumentation service is disabled.

- SCO Unix OpenServer 5.0x driver version 4.03 (SCOUNIX.50 directory)

  1. Corrected a problem running on SMP machines.

- SCO UnixWare 7.x driver version 2.2.0 (UNIXWARE directory)

  1. Added driver to diskette.

3.0 Installation and Configuration
____________________________________

*Note: If installing from an update package, see the additional instructions in Appendix A.

3.1 OS/2
----------

The OS/2 device driver and associated files reside in the root directory.

The OS/2 device driver can handle up to 4 ethernet controllers. These controllers can be configured as any combination of individual controllers and redundant pairs. Each adapter can only be part of 1 redundant pair.

A redundant pair of network adapters can be defined such that a loss of good link status on the primary adapter causes all ethernet traffic on this link to be automatically switched to the standby adapter. If the link for the primary adapter is restored, the sessions on the standby adapter will automatically switch back to the primary adapter.

To enable failover operation, run the MPTS program and edit the AMD device driver parameters. Set the PermaNet Server feature to TRUE, then enter the parameters for the Primary and Standby slots. The following table gives the slot number for various Netfinity servers:

   Machine          Slot
   -------          ----
   Netfinity 5000    9
   Netfinity 5500    E
   Netfinity 5600    2
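For orientation only, the fragment below sketches what the saved failover settings might look like in the AMD driver section of \IBMCOM\PROTOCOL.INI after MPTS writes them. The section and keyword names shown here are illustrative placeholders, not the exact names used by the driver; let MPTS create the real entries through its parameter panel and use this sketch only to recognize them. The example assumes a Netfinity 5000 (onboard controller in slot 9) as the primary and an adapter in slot 2 as the standby.

   [AMD_PCNET_NIF]           ; placeholder section name - MPTS assigns the real one
      PERMANET = "TRUE"      ; placeholder keyword: PermaNet Server feature enabled
      PRIMARY  = 9           ; placeholder keyword: slot of the primary (onboard) controller
      STANDBY  = 2           ; placeholder keyword: slot of the standby adapter

Editing PROTOCOL.INI by hand is not the supported path; use the MPTS parameter panel described above.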
To have a message written into IBMCOM\LANTRAN.LOG whenever a failover occurs, copy the file PCNETOS2.EXE onto the hard disk and edit the CONFIG.SYS file. Add the statement

   RUN=c:\path\PCNETOS2.EXE

to the beginning of the file, where c is the hard disk and path is the directory path into which you copied the PCNETOS2.EXE file. Once the machine is rebooted with this statement active, PCNETOS2.EXE will be running and cannot be overwritten. To update the PCNETOS2.EXE file in the future, first remove the above statement from the CONFIG.SYS file and reboot the machine.

3.2 Windows NT
----------------

The Windows NT device driver and associated files reside in the A:\WINNT subdirectory. The driver in the WINNT subdirectory supports the failover function. A redundant pair of network adapters can be defined such that a loss of good link status on the primary adapter causes all ethernet traffic on this link to be automatically switched to the secondary adapter.

Two options are available for recovering from a failover condition. The option is determined by a check box in the adapter configuration panel. If the IBM Netfinity Hot Plug PCI for Windows NT 4.0 package is installed on the machine, an extra box called "Enable for DMI / Hot Swap Support" appears on the configuration panel. Users who do not have the IBM Netfinity Hot Plug PCI for Windows NT 4.0 package installed on their server will not see the "Enable for DMI / Hot Swap Support" box in the panel.

NOTE: The order of installation is important. The IBM Netfinity Hot Plug PCI for Windows NT 4.0 package must be installed before the IBM Netfinity 10/100 Fault Tolerant Adapter driver is installed. If the adapter device driver is installed before the Hot Plug package, the adapter device driver will not see the Hot Plug code, because the adapter device driver only checks the NT registry for the Hot Plug package during installation. If the Hot Plug package is added after the adapter device driver is installed, the adapter must be removed and re-added in order for it to detect the Hot Plug package.

If "Enable for DMI / Hot Swap Support" is not checked or is not present at all, traffic will automatically switch back to the primary adapter when the primary link status is restored. In this mode, hot swap of the adapter is not possible. Users with the IBM Netfinity Hot Plug PCI for Windows NT 4.0 package installed should check the "Enable for DMI / Hot Swap Support" box. If the box is checked, traffic will remain on the secondary adapter until the user directs it to return to the primary adapter. This can be done after the hot swap replacement of the primary adapter or by using DMI services.

The NT device driver can handle up to 4 ethernet controllers. Two of these controllers can be configured as a redundant pair.

To enable failover operation, select Settings from the Start menu, then select Control Panel, then Network, then Adapters. Highlight the AMD PCnet PermaNet Server LFT Adapter and press the Properties button. Check the Grouping box and designate the primary and secondary adapters.
The onboard controller location is given below for various servers:

   Machine          Location
   -------          --------
   Netfinity 5000   Bus 0, slot 9
   Netfinity 5500   Bus 0, slot 14
   Netfinity 5600   Bus 0, slot 2

If a failover occurs, a message is written to the NT Event Viewer and a DMI alert is generated.

3.2.1 WINS Configuration
------------------------

To install a WINS server when a failover pair is installed on the system, follow these steps:

1. Install the NT driver. In the properties dialog box for the adapter, enable grouping and configure the 2 adapters as a failover pair. Note the primary adapter (PCNTN4M1 or PCNTN4M2).

2. Install the WINS service.

3. In the TCP/IP properties dialog box, specify the IP address for the primary adapter. For the secondary adapter, specify a dummy IP address that has the same netid as the primary IP address. For instance, if the subnet mask is 255.0.0.0 and the primary IP address is 10.10.10.1, the secondary IP address can be 10.200.10.200. If the primary IP address is 139.92.12.1, the secondary IP address can be 139.95.13.200.

4. In the TCP/IP properties dialog box, on the WINS Address page, select the primary adapter. Specify the same address for both the primary and the secondary WINS server; this address should be the IP address of the primary adapter. Leave the WINS entries blank for the secondary adapter.

5. Under Bindings, select "All Protocols" from the list box. Under WINS Client (TCP/IP), make sure that the primary adapter is placed higher than the secondary adapter; the "Move Up" and "Move Down" buttons can be used to ensure this. Now select the secondary adapter and click the Disable button. This disables the WINS binding with the secondary adapter. Close the applet and reboot the server for the changes to take effect.

3.2.2 IBM PCI Hot Plug Solution
-------------------------------

The WINNT subdirectory includes the device driver files that support the IBM Netfinity Hot Plug PCI for Windows NT 4.0 package. This package provides high availability on PCI Hot Plug-capable IBM servers. With IBM PCI Hot Plug you can Hot Add to install and configure a new adapter while the system is running, Hot Swap to replace a faulty redundant adapter while the system is running, and monitor the status of Hot Plug PCI slots on your server with the IBM PCI Hot Plug Applet. Note that Hot Plug and Hot Swap are separate services in Windows NT; you cannot Hot Swap an adapter using the Hot Plug service.

In order to use Hot Plug, it is necessary to install the Intel DMI SDK (see www.dmtf.org for more information) and the IBM PCI Hot Plug software (available at www.ibm.com/pc/support/netfinity). The DMI software must be installed before Hot Plug or Hot Swap will work.

After a Hot Add of a NIC, the MAC addresses are returned as all zeros (000000000000). This does not affect the abilities of the adapter.

The Hot Remove feature is NOT supported.

It is highly recommended that any of the PCI Hot Plug operations be performed during periods of low traffic.

After a Hot Add of an adapter, the Windows NT applets (Network applet and SCSI applet) will ask the user for a reboot. No reboot is necessary!

3.2.3 DMI Support
-----------------

DMI is supported under Windows NT. The device driver component, PCNET.MIF, and the instrumentation code, PCNETFO.EXE, are located in the A:\WINNT subdirectory.

To install DMI support under Windows NT, install the Intel SDK. Then install a DMI browser of your choice.
Information on these items is available at www.dmtf.org. The files from the A:\WINNT subdirectory will be copied onto your hard disk during installation. The PCNETFO.EXE instrumentation code will be started in the background each time the machine is rebooted. Open the DMI browser and follow the steps necessary to add the PCNETFO.MIF component. Once the AMD PCnet component is installed, you can select it and view any parameter with the browser.

NOTE: If more than 1 AMD controller is located in the system, the browser will only show values for one of them.

3.3 NetWare
-------------

The IntranetWare device driver and associated files reside in the A:\NOVELL subdirectory. The NetWare 3.X driver is found in the A:\NOVELL\VER3_X subdirectory.

NOTE: The rest of this section applies only to IntranetWare and later systems.

If you are using NetWare 4.11, install Service Pack 8a on your system; if you are using NetWare 5.1, install Service Pack 1. These Service Packs may be downloaded from Novell's website at www.novell.com.

The NetWare device driver can handle up to 4 ethernet controllers. These controllers can be configured as any combination of individual controllers and redundant pairs. Each adapter can only be part of 1 redundant pair.

The NetWare driver complies with the ODI 3.31 specification and requires the following minimum levels of NetWare loadable modules for proper operation:

   MSM.NLM       version 3.80a
   ETHERTSM.NLM  version 3.67a

Updated versions of these modules are available on this diskette.

The ethernet controller supports auto-negotiation on the ethernet link. To override auto-negotiation and manually specify the mode, load the driver with the appropriate parameter as follows:

   LOAD PCNTNW LINESPEED=x

where x is 10H, 10F, 100H, 100F, or AUTO:

   10H  = 10 Mbits/sec, half duplex
   10F  = 10 Mbits/sec, full duplex
   100H = 100 Mbits/sec, half duplex
   100F = 100 Mbits/sec, full duplex
   AUTO = auto-negotiation

The NetWare driver supports both hot plug and failover operation. See the note below for important configuration requirements. To enable failover operation from the console prompt, enter the following command:

   LOAD PCNTNW PRIMARY=d SECONDARY=e

where d is the slot number of the primary adapter and e is the slot number of the secondary adapter.

The slot number of the onboard adapter can vary depending on the configuration of the machine. To determine the onboard slot number, load the driver with no parameters; you will be prompted with a list of slot numbers to choose from. The onboard adapter will have a slot number greater than or equal to 10,000. Once the slot number is known, unload the driver and reload it using the failover parameters described above.

A failover from the primary to the secondary adapter occurs if a link failure condition is detected on the primary adapter. The failover status can be viewed in the custom counters fields of Monitor. If the link for the primary adapter is restored, the sessions on the secondary adapter will automatically switch back to the primary adapter. If a hot replace of the primary adapter occurred while traffic was being handled by the secondary adapter, an automatic failback to the primary adapter will not occur once the primary link is restored. In this case, use the following command to return the traffic to the primary controller:

   LOAD PCNTNW SCAN

NOTE: The IBM Netfinity 5000 does not support hot plug operations on any slots, but does support the failover feature.
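As a worked example of the commands above (the slot numbers are illustrative; substitute the values reported on your own server), a console sequence that configures the onboard controller as the primary of a failover pair might look like this:

   LOAD PCNTNW                              (note the onboard slot from the prompt, e.g. 10001)
   UNLOAD PCNTNW
   LOAD PCNTNW PRIMARY=10001 SECONDARY=2
   ...
   LOAD PCNTNW SCAN                         (only after a hot replace of the primary adapter)

The LOAD statement with the PRIMARY and SECONDARY keywords can also be placed in AUTOEXEC.NCF; the first LOAD/UNLOAD pair is only needed once to discover the onboard slot number. Bind your protocols to the driver afterwards as you would for any other LAN driver.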
For the failover feature to operate correctly, the primary and secondary ethernet controllers must share the same interrupt. Interrupt assignments can be configured manually using the BIOS utilities, which are available by pressing the F1 key during system startup. Hot added adapters should not require manual interrupt assignments: a hot added adapter is automatically assigned the same interrupt as an identical adapter located in the machine at power up time, and a hot added IBM 10/100 Fault Tolerant Adapter adopts the same interrupt assignment as the onboard controller.

Some .DSK drivers are not compatible with drivers written to the ODI 3.31 specification. In some cases, the failover feature will not operate correctly if the adapter associated with the .DSK driver shares the same interrupt as one of the ethernet controllers of the failover pair. It is recommended that the primary and secondary ethernet controllers not share their interrupt with any other adapter or device, though this is not necessary in all cases.

3.4 SCO Open Server
---------------------

The SCO OpenServer files reside in the A:\SCOUNIX.50 subdirectory. The file PNT.TAR contains the archive of all the files required to build the driver.o file.

3.4.1 Installation Procedures
-----------------------------

For single processor systems:
-----------------------------

1. Ensure that the pnt driver can be installed from the CD-ROM using "netconfig".

2. Remove it, relink the kernel, and reboot.

3. Go to the directory /opt/K/SCO/pnt/5.0.5a/ID/pnt, save its contents elsewhere, and then remove them.

   Note: Depending on the OpenServer version, you may need to change the '5.0.5a' directory name in the above path.

4. Run the following commands:

      ndinstall -d pnt
      doscp a:\scounix.50\pnt.tar pnt.tar
      tar xvf pnt.tar >/dev/null 2>&1
      rm pnt.tar
      ndinstall -a pnt

5. Relink the kernel and reboot.

6. Now install drivers for the AMD PCnet cards using the netconfig menu options.

For systems with multiple processors:
-------------------------------------

Before installing the drivers, install an additional processor on the system (which already has SCO OpenServer 5.0.5a on it) by running the CUSTOM menu options. Select SMP MultiProcessor installation and add the additional processor by entering the license number when prompted. Repeat these steps for each additional processor on the system. Relink the kernel and reboot. Then follow the same steps described above for single processor systems to install the pnt driver.

Note: If there are warning messages when the driver tries to register its IRQs, do the following:

   a. Run netconfig and remove the driver, relink, and reboot.
   b. For a PCI device, check the func#, dev#, and bus# with "/etc/hw -vr pci".
   c. Now add the driver again by running netconfig.
   d. Ensure the above parameters match.

The following values are correct for the onboard controller of the IBM Netfinity 5000 (make sure they are not -1):

   PNT_0_PCI_BUS   0
   PNT_0_PCI_DEV   9
   PNT_0_PCI_FUNC  0

The following values are correct for the onboard controller of the IBM Netfinity 5500 (make sure they are not -1):

   PNT_0_PCI_BUS   0
   PNT_0_PCI_DEV   14
   PNT_0_PCI_FUNC  0

The following values are correct for the onboard controller of the IBM Netfinity 5600 (make sure they are not -1):

   PNT_0_PCI_BUS   0
   PNT_0_PCI_DEV   2
   PNT_0_PCI_FUNC  0

3.4.2 Driver Options
---------------------

o Override auto-negotiation and automatic network port selection via keywords provided in space.c

  - To select a connecting medium, set only one of the AUI, (internal) 10BaseT, or MII variables to '1'. If AUI, 10BaseT, and MII are all '0', automatic port selection is activated. Only the MII port is supported on the IBM Netfinity 5500, and this port will be selected if automatic port selection is enabled.

  - To select 100 Mbit/s speed, set the variable SPEED100 to '1' in the space.c file. Set it to '0' to select 10 Mbit/s speed.

  - To select full duplex mode, set FULLDUP to '1' in the space.c file. Set FULLDUP to '0' for half duplex mode.

o LED programming via a keyword provided in space.h

  - The LED keyword is provided for knowledgeable users who want to override the default settings. If the user provides specific values for the LEDs in the driver's space.h file, these values override the factory defaults and are then reflected by the adapter's LEDs.

    Go to the directory /etc/conf/pack.d/pnt/ on the system where the pnt driver configuration files are located. The space.h file contains the driver LED values. If the values are all 0xFFFFFFFF, the driver takes the default LED values from the EEPROM. Edit the space.h file to reflect the desired LED values. After the change is made, relink the kernel with the command /etc/conf/cf.d/link_unix and reboot; when the system comes up, the new LED values are reflected by the LEDs. (A sketch of these edits follows this list.)
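The fragment below is a minimal sketch of the edits described above, assuming the stock layout of the configuration files under /etc/conf/pack.d/pnt/. The exact declarations (scalar variables versus per-controller arrays) and the LED keyword names depend on the space.c and space.h shipped with your driver version, so edit the existing lines rather than copying these verbatim; LED0 below is a placeholder name.

   /* space.c -- force 100 Mbit/s, full duplex                        */
   SPEED100 = 1;          /* 1 = 100 Mbit/s, 0 = 10 Mbit/s            */
   FULLDUP  = 1;          /* 1 = full duplex, 0 = half duplex         */
   /* Leave the AUI / 10BaseT / MII port keywords at 0 for automatic  */
   /* port selection, or set exactly one of them to 1.                */

   /* space.h -- LED overrides; 0xFFFFFFFF keeps the EEPROM defaults  */
   LED0 = 0xFFFFFFFF;     /* placeholder keyword name                 */

After editing, relink the kernel and reboot for the new values to take effect:

   /etc/conf/cf.d/link_unix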
3.5 SCO UnixWare
-----------------

The files required for UnixWare 7 reside in the A:\UNIXWARE subdirectory. The UnixWare driver is named pnt2. It can be configured to run in the following modes:

   o Standard driver
   o Dynamic Load Balancing and Port Aggregation driver
   o Fast EtherChannel driver

This release consists of two packages:

   1. Pnt2:    This package contains the driver files.
   2. Pnt2fec: This package contains installation support files for Fast EtherChannel or Dynamic Load Balancing with Port Aggregation. It requires Perl 5.0 or later to be installed on your system.

Note: FEC stands for Fast EtherChannel, DLB for Dynamic Load Balancing, and PAg for Port Aggregation.

3.5.1 Installation
------------------

To install these packages, untar the pnt_driv.tar file into a temporary directory and execute the pkgadd command there. After the pkgadd command completes, use the netcfg utility to configure the ethernet controllers.

Example:

   cd /tmp
   tar xvf pnt_driv.tar
   pkgadd -d `pwd`        (pwd is /tmp)
   netcfg
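Before running netcfg, it can be useful to confirm that both packages registered and that the Perl prerequisite for the FEC/DLB support package is met. A short check, assuming the installed package instances are named pnt2 and pnt2fec (use whatever instance names pkgadd actually reported):

   perl -v                  # the FEC/DLB support package requires Perl 5.0 or later
   pkginfo pnt2 pnt2fec     # both packages should be listed as installed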
3.5.2 Configuration
-------------------

3.5.2.1 Standard driver
-----------------------

1. Run the netcfg menu utility to configure the driver.
2. Select the AMD PCnet Fast EtherChannel Driver and add the protocols as for a standard driver.

3.5.2.2 Fast EtherChannel driver
--------------------------------

This driver supports Fast EtherChannel configuration with load balancing and link failover features. These extra features require a Fast EtherChannel-capable switch such as the Cisco Systems Catalyst 5000 or Cisco 2900 series. Refer to the switch documentation for the Fast EtherChannel configuration on the switch. For Fast EtherChannel configuration on your server, use the specific procedure described below.

A group of adapters constituting a single logical channel is called a trunk. The operating system is not aware of all the adapters in a trunk; it sees only one device in the group, called the trunk master, which is the device configured by the netcfg utility. All other adapters in the trunk share the same MAC address and are called slave devices. All trunk master devices should be properly configured with netcfg. Slave devices are attached to their corresponding master devices at system boot time using the information specified in the trunk configuration file /etc/etherchannel/ethertrunk.conf. All EtherChannel configuration is done in terms of the PCI slots used by the PCnet cards.

Due to a bug in the netcfg implementation, the following specific steps should be performed to configure systems with several PCnet-Fast adapters:

1. Use netcfg to configure the trunk master device: select the EtherChannel (FEC/DLB and PAg) driver and add the protocols.

2. Run the command

      resmgr | grep 10222000

   to see all available PCnet cards in the system. The third entry from the end is the PCI slot number and the first entry is the resource database key.

3. Use the resmgr -r -k <key> command (where <key> is the resource database key noted in step 2) to remove each entry that was not configured by netcfg.

4. Run the command /etc/conf/bin/idconfupdate to permanently save the resource manager information.

5. Edit the file /etc/etherchannel/etherchannel.conf and list the devices (including the master and all slaves) for each trunk. A sample file, /etc/etherchannel/etherchannel.conf.sample, is provided as an example.

6. Reboot the system.

7. Use the resmgr utility to check your configuration.

8. Configure the switch to support the Fast EtherChannel configuration.

9. Spanning Tree must be disabled on the ports configured for Fast EtherChannel. If VLANs are configured, make sure that all ports in the EtherChannel are in the same VLAN. Also, the regular pnt driver and the pnt2 driver cannot be used together on the server.

10. Connect all the adapters in the trunk to the switch.

11. Reboot the system. The file /etc/.osm should reflect the current trunk configuration.

Note: When any trunks are configured, the driver uses the syslog facility to log information about links going up and down. The driver only detects the status of links that are in use; it will not show status changes for any link that is not currently in use by the driver.

3.5.2.2.1 Configuration File Format
-----------------------------------

The configuration of Fast EtherChannel groups is specified in the file /etc/etherchannel/ethertrunk.conf. The sample configuration file /etc/etherchannel/ethertrunk.conf.sample is provided for reference. This file contains a description of each trunk, one trunk per line. Each line consists of pairs of the form keyword = value, separated by semicolons. Lines starting with a hash sign or a semicolon are treated as comments. The recognized keywords are:

   trunk:  An integer specifying the trunk number. This keyword is required. Trunks should be numbered from 0 to a maximum of 8.
   name:   A string specifying the trunk name.
   slot:   An integer specifying the PCI slot number of an adapter to be included in the trunk. All slots should be specified on the same line. All specified slots will be gathered into one Fast EtherChannel trunk.

Example configuration file:

   # Example entries. Do not use as is!! Edit for your hardware
   # configuration!
   # Define two trunks, two cards in each.
   # One trunk has cards in PCI slots 5 and 9, another trunk has cards
   # in PCI slots 6 and 8.
   #
   trunk=0: name = t0; slot = 5, slot = 9;
   trunk=1: name = t1; slot = 6, slot = 8;

3.5.2.3 Dynamic Load Balancing with Port Aggregation
----------------------------------------------------

To configure the driver for dynamic load balancing and port aggregation, follow the steps below. The switch to which the server is connected may be a non-FEC-aware switch. Up to 4 adapters may be configured to form a trunk.
In this case there is a maximum of 400 Mbps of bandwidth from the server to the switch, whereas there is 100 Mbps of bandwidth from the switch to the server if a 10/100 Mbps switch is used.

A group of adapters constituting a single logical channel is called a trunk. The operating system is not aware of all the adapters in a trunk; it sees only one device in the group, called the trunk master, which is the device configured by the netcfg utility. All other adapters in the trunk share the same MAC address and are called slave devices. All trunk master devices should be properly configured with netcfg. Slave devices are attached to their corresponding master devices at system boot time using the information specified in the trunk configuration file /etc/etherchannel/ethertrunk.conf. All EtherChannel configuration is done in terms of the PCI slots used by the PCnet cards.

Due to a bug in the netcfg implementation, the following specific steps should be performed to configure systems with several PCnet-Fast adapters:

1. Use netcfg to configure the trunk master device: select the EtherChannel (FEC/DLB and PAg) driver and add the protocols.

2. Run the command

      resmgr | grep 10222000

   to see all available PCnet cards in the system. The third entry from the end is the PCI slot number and the first entry is the resource database key.

3. Use the resmgr -r -k <key> command (where <key> is the resource database key noted in step 2) to remove each entry that was not configured by netcfg.

4. Run the command /etc/conf/bin/idconfupdate to permanently save the resource manager information.

5. Edit the file /etc/etherchannel/etherchannel.conf and list the devices (including the master and all slaves) for each trunk. A sample file, /etc/etherchannel/etherchannel.conf.sample, is provided as an example.

6. Reboot the system.

7. Use the resmgr utility to check your configuration.

8. Spanning Tree must be disabled on the ports configured for DLB/PAg. If VLANs are configured, make sure that all ports in the DLB/PAg group are in the same VLAN. Also, the regular pnt driver and the pnt2 driver cannot be used together on the server.

9. Connect all the adapters in the trunk to the switch.

10. Reboot the system. The file /etc/.osm should reflect the current trunk configuration.

Note: When any trunks are configured, the driver uses the syslog facility to log information about links going up and down. The driver only detects the status of links that are in use; it will not show status changes for any link that is not currently in use by the driver.

3.5.2.3.1 Configuration File Format
-----------------------------------

The configuration of trunk groups is specified in the file /etc/etherchannel/ethertrunk.conf. The sample configuration file /etc/etherchannel/ethertrunk.conf.sample is provided for reference. This file contains a description of each trunk, one trunk per line. Each line consists of pairs of the form keyword = value, separated by semicolons. Lines starting with a hash sign or a semicolon are treated as comments. The recognized keywords are:

   trunk:  An integer specifying the trunk number. This keyword is required. Trunks should be numbered from 0 to a maximum of 4.
   name:   A string specifying the trunk name.
   slot:   An integer specifying the PCI slot number of an adapter to be included in the trunk. All slots should be specified on the same line. All specified slots will be gathered into one trunk.

Example configuration file:

   # Example entries. Do not use as is!! Edit for your hardware
   # configuration!
   # Define two trunks, two cards in each.
   # One trunk has cards in PCI slots 5 and 9, another trunk has cards
   # in PCI slots 6 and 8.
   #
   trunk=0: name = t0; slot = 5, slot = 9;
   trunk=1: name = t1; slot = 6, slot = 8;

3.5.3 Limitations and Requirements
----------------------------------

1. Spanning Tree must be turned OFF on the switch ports connected to the trunk cards.
2. All the ports involved in the FEC or DLB/PAg group must be in the same VLAN.
3. Perl 5.0 or later must be installed on the system.
4. The AMD PCnet PCI Fast, Fast+, and Fast III adapters are supported.
5. The PNT2 driver replaces the original AMD PNT driver.
6. The PNT2 driver cannot be used together with the old PNT driver: the regular PNT driver cannot be used along with the PNT2 EtherChannel (FEC/DLB and PAg) driver. The new driver should be used for all AMD PCnet network adapters.
7. If there are loading problems in the UnixWare 7.1 release, comment out the #include line in the /etc/inst/nd/mdi/pnt2/Space.c file and try loading again.

3.6 Wincenter/Winframe
-----------------------

The files required for Winframe 1.7a, Wincenter Connect 3.1, and Wincenter Pro 3.1 reside in the A:\WINCTR subdirectory.

3.7 Diagnostics
----------------

The diagnostic files reside in the A:\DIAG subdirectory. Please read the "Restrictions" section below before starting the tests.

PCnet Diagnostic Utility Release 2.1  01/27/98

The elements of this diagnostic are:

   o Resources tests
   o Internal Loopback test
   o External Loopback test
   o Link test as Sender or Responder

To execute the tests:

   o Copy both executables (AMDDIAG.EXE and ND_MAIN.EXE) to the same subdirectory.
   o At the DOS prompt, type AMDDIAG. To also generate a .LOG file with a summary of the tests, type AMDDIAG /log instead.
   o Follow the instructions on the screen to execute specific tests.
   o For the test summary, edit AMDDIAG.LOG at the DOS prompt.
   o NOTE: The second log file generated by the test (ND_MAIN.LOG) is intended for debugging purposes and is not recommended for end users.
   o Setup instructions to run the Link Test: connect two systems peer to peer. Run "Link Test as Responder" on one system and wait for the "<== Responding" message to appear. Then run "Link Test as Sender" on the other system. The Pass/Fail status will appear on the system running "Link Test as Sender".

3.7.1 Restrictions
-------------------

o The "Run ALL Tests" option will only execute the Resources tests, the Internal Loopback tests, and the External Loopback tests.
o When the Link test is executed, the "Link Test as Responder" machine must be started first; wait for the "<== Responding" message, then run the "Link Test as Sender" machine.
o The "Link Test as Responder" machine can be terminated by pressing any key on the keyboard.
o A loopback plug is required to run the External Loopback Test.
o The External Loopback test is not available for PCnet Fast and PCnet Fast+ adapters that are forced to a link speed of 10 Mbps. This is the case when the adapter is connected to a 10 Mb hub or a loopback plug is not connected.

4.0 Web Sites and Support Phone Number
________________________________________

IBM Support Web Site:              http://www.pc.ibm.com/support
IBM Marketing Netfinity Web Site:  http://www.pc.ibm.com/netfinity

If you have any questions about this update, or problems applying the update, go to the following Help Center World Telephone Numbers URL: http://www.pc.ibm.com/qtechinfo/YAST-3P2QLY.html

5.0 Trademarks and Notices
____________________________

The following terms are trademarks of the IBM Corporation in the United States, other countries, or both:

   IBM
   OS/2
   Netfinity

Microsoft and Windows NT are trademarks or registered trademarks of Microsoft Corporation.
AMD, PCnet, and PermaNet are trademarks or registered trademarks of Advanced Micro Devices, Inc.
NetWare and IntranetWare are registered trademarks of Novell, Inc.
SCO is a registered trademark of The Santa Cruz Operation, Inc.
Cisco and Fast EtherChannel are trademarks or registered trademarks of Cisco Systems, Inc.
Intel is a registered trademark of Intel Corporation.
Winframe is a registered trademark of Citrix Systems, Inc.
Wincenter Connect and Wincenter PRO are trademarks or registered trademarks of NCD, Inc.

Other company, product, and service names may be trademarks or service marks of others.

6.0 Disclaimer
_______________

THIS DOCUMENT IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. IBM DISCLAIMS ALL WARRANTIES, WHETHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE AND MERCHANTABILITY WITH RESPECT TO THE INFORMATION IN THIS DOCUMENT. BY FURNISHING THIS DOCUMENT, IBM GRANTS NO LICENSES TO ANY PATENTS OR COPYRIGHTS.

Note to U.S. Government Users -- Documentation related to restricted rights -- Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.

Appendix A: Package-Specific Installation Instructions
________________________________________________________

This update is packaged as a self-extracting Package for the Web (PFW). To unpack, this update requires that your TEMP environment variable be set to a path with read/write access. You must be logged in as an administrator.

The command-line syntax for the Package-for-the-Web driver update package is:

   Package.exe [-s] [-a [-s] | [-x directory] | [-?] ]

   [-s]            This initial -s tells the Package-for-the-Web software to install silently; it will not prompt if files need to be overwritten in the %TEMP% directory.

   [-a]            Tells the Package-for-the-Web software to pass all subsequent options to the install package (that is, the update).

   [-s]            The second -s option indicates that the update should run silently and unattended. For firmware updates, the update is scheduled to run on the next reboot; an immediate reboot can be forced with the -r option.

   [-x directory]  Use with firmware updates to extract the update to the named directory. Since Package-for-the-Web extracts itself to a subdirectory of the %TEMP% directory, a relative directory will be relative to that location; normally you will want to specify an absolute directory.

   [-?]            Display information about the command-line switches.

Only the apply-update-silently (-s) option is necessarily unattended; the other command-line options, such as display help (-?), may require the user to press a key to continue.

If Windows packages are run without any command-line options, a GUI is displayed. This GUI offers all the options available at the command line, except that the -w option is not available.
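For example (illustrative invocations only, composed from the switch descriptions above):

   Package.exe -s -a -s     Unpack silently, then apply the driver update silently and unattended.
   Package.exe -?           Display the available command-line switches.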