Set the FC or UTA/UTA2 configuration on node4
If node4 has onboard FC ports, onboard unified target adapter (UTA/UTA2) ports, or a UTA/UTA2 card, you must configure the settings before completing the rest of the procedure.
You might need to complete the Configure FC ports on node4 or Check and configure UTA/UTA2 ports on node4 section, or both sections.
If node4 does not have onboard FC ports, onboard UTA/UTA2 ports, or a UTA/UTA2 card, and you are upgrading a system with storage disks, you can skip to Verify the node4 installation. However, if you have a V-Series system or have FlexArray Virtualization Software and are connected to storage arrays, and node4 does not have onboard FC ports, onboard UTA/UTA2 ports, or a UTA/UTA2 card, you must return to the section Install and boot node4 and resume at Step 22. Make sure that node4 has sufficient rack space. If node4 is in a separate chassis from node2, you can put node4 in the same location as node3. If node2 and node4 are in the same chassis, then node4 is already in its appropriate rack location.
Configure FC ports on node4
If node4 has FC ports, either onboard or on an FC adapter, you must set port configurations on the node before you bring it into service because the ports are not preconfigured. If the ports are not configured, you might experience a disruption in service.
You must have the values of the FC port settings from node2 that you saved in the section Prepare the nodes for upgrade.
You can skip this section if your system does not have FC configurations. If your system has onboard UTA/UTA2 ports or a UTA/UTA2 adapter, you configure them in Check and configure UTA/UTA2 ports on node4.
If your system has storage disks, you must enter the commands in this section at the cluster prompt. If you have a V-Series system or a system with FlexArray Virtualization Software connected to storage arrays, you enter commands in this section in Maintenance mode.
-
Take one of the following actions:
If the system that you are upgrading has storage disks, enter:
system node hardware unified-connect show
If the system is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays, enter:
ucadmin show
The system displays information about all FC and converged network adapters on the system.
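For example, on a system with storage disks you can run the command from the cluster prompt and, assuming the standard -node filter is available, limit the output to the new controller (the node name node4 is a placeholder for your node's actual name):
system node hardware unified-connect show -node node4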
-
Compare the FC settings on node4 with the settings that you captured earlier from node2.
-
Take one of the following actions:
If the system that you are upgrading has storage disks, modify the FC ports on node4 as needed:
-
To program target ports:
ucadmin modify -m fc -t target adapter
-
To program initiator ports:
ucadmin modify -m fc -t initiator adapter
-t is the FC4 type: target or initiator.
If the system is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays, modify the FC ports on node4 as needed:
ucadmin modify -m fc -t initiator -f adapter_port_name
-t is the FC4 type, target or initiator. The FC ports must be programmed as initiators.
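For example, a hypothetical pair of adapters 0e and 0f could be programmed as target and initiator and then verified; substitute the adapter names reported on your system and enter the commands at the prompt indicated for your configuration:
ucadmin modify -m fc -t target 0e
ucadmin modify -m fc -t initiator 0f
ucadmin show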
-
Exit Maintenance mode:
halt
-
Boot the system from LOADER prompt:
boot_ontap menu
-
After you enter the command, wait until the system stops at the boot environment prompt.
-
Select option
5
from the boot menu for maintenance mode.
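A minimal sketch of this halt-and-reboot sequence, assuming the default boot menu text (the exact menu wording can vary by ONTAP release):
*> halt
...
LOADER> boot_ontap menu
...
Please choose one of the following:
(1) Normal Boot.
...
(5) Maintenance mode boot.
...
Selection (1-11)? 5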
-
Take one of the following actions:
If the system that you are upgrading has storage disks:
-
Go to Check and configure UTA/UTA2 ports on node4 if node4 has a UTA/UTA2 card or UTA/UTA2 onboard ports.
-
Skip this section and go to Verify the node4 installation if node4 does not have a UTA/UTA2 card or UTA/UTA2 onboard ports.
If the system is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays:
-
Go to Check and configure UTA/UTA2 ports on node4 if node4 has a UTA/UTA2 card or UTA/UTA2 onboard ports.
-
Skip the section Check and configure UTA/UTA2 ports on node4 if node4 does not have a UTA/UTA2 card or UTA/UTA2 onboard ports, return to the section Install and boot node4, and resume at Step 23.
-
Check and configure UTA/UTA2 ports on node4
If node4 has onboard UTA/UTA2 ports or a UTA/UTA2 card, you must check the configuration of the ports and configure them, depending on how you want to use the upgraded system.
You must have the correct SFP+ modules for the UTA/UTA2 ports.
UTA/UTA2 ports can be configured into native FC mode or UTA/UTA2 mode. FC mode supports FC initiator and FC target; UTA/UTA2 mode allows concurrent NIC and FCoE traffic to share the same 10GbE SFP+ interface and supports FC target.
NetApp marketing materials might use the term UTA2 to refer to CNA adapters and ports. However, the CLI uses the term CNA.
UTA/UTA2 ports might be on an adapter or on the controller with the following configurations:
-
UTA/UTA2 cards ordered at the same time as the controller are configured before shipment to have the personality you requested.
-
UTA/UTA2 cards ordered separately from the controller are shipped with the default FC target personality.
-
Onboard UTA/UTA2 ports on new controllers are configured (before shipment) to have the personality you requested.
However, you should check the configuration of the UTA/UTA2 ports on node4 and change it, if necessary.
Attention: If your system has storage disks, you enter the commands in this section at the cluster prompt unless directed to enter Maintenance mode. If you have a MetroCluster FC system, V-Series system or a system with FlexArray Virtualization software that is connected to storage arrays, you must be in Maintenance mode to configure UTA/UTA2 ports.
-
Check how the ports are currently configured by using one of the following commands on node4:
If the system has storage disks, enter:
system node hardware unified-connect show
If the system is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays, enter:
ucadmin show
The system displays output similar to the following example:
*> ucadmin show
                 Current  Current    Pending  Pending   Admin
Node   Adapter   Mode     Type       Mode     Type      Status
----   -------   ----     ---------  -------  --------  -------
f-a    0e        fc       initiator  -        -         online
f-a    0f        fc       initiator  -        -         online
f-a    0g        cna      target     -        -         online
f-a    0h        cna      target     -        -         online
f-a    0e        fc       initiator  -        -         online
f-a    0f        fc       initiator  -        -         online
f-a    0g        cna      target     -        -         online
f-a    0h        cna      target     -        -         online
*>
-
If the current SFP+ module does not match the desired use, replace it with the correct SFP+ module.
Contact your NetApp representative to obtain the correct SFP+ module.
-
Examine the output of the
ucadmin show
command and determine whether the UTA/UTA2 ports have the personality you want.
-
Take one of the following actions:
If the CNA ports do not have the personality that you want, go to Step 5.
If the CNA ports have the personality that you want, skip Step 5 through Step 12 and go to Step 13.
-
Take one of the following actions:
If you are configuring ports on a UTA/UTA2 card, go to Step 7.
If you are configuring onboard UTA/UTA2 ports, skip Step 7 and go to Step 8.
-
If the adapter is in initiator mode, and if the UTA/UTA2 port is online, take the UTA/UTA2 port offline:
storage disable adapter adapter_name
Adapters in target mode are automatically offline in Maintenance mode.
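For example, a hypothetical initiator port 0e that is still online could be taken offline before its personality is changed (substitute your own adapter name):
*> storage disable adapter 0e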
-
If the current configuration does not match the desired use, change the configuration as needed:
ucadmin modify -m fc|cna -t initiator|target adapter_name
-
-m is the personality mode, FC or 10GbE UTA.
-
-t is the FC4 type, target or initiator.
You must use FC initiator for tape drives, FlexArray Virtualization systems, and MetroCluster configurations. You must use FC target for SAN clients.
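For example, to change a hypothetical onboard port 0g from CNA target to FC initiator (the adapter name is a placeholder; the change is reported as pending by ucadmin show until the node reboots):
ucadmin modify -m fc -t initiator 0g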
-
-
Verify the settings by using the following command and examining its output:
ucadmin show
-
Verify the settings:
If the system has storage disks, enter:
ucadmin show
If the system is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays, enter:
ucadmin show
The output in the following example shows that the FC4 type of adapter "1b" is changing to initiator and that the mode of adapters "2a" and "2b" is changing to cna:
*> ucadmin show
Node  Adapter  Current Mode  Current Type  Pending Mode  Pending Type  Admin Status
----  -------  ------------  ------------  ------------  ------------  ------------
f-a   1a       fc            initiator     -             -             online
f-a   1b       fc            target        -             initiator     online
f-a   2a       fc            target        cna           -             online
f-a   2b       fc            target        cna           -             online
4 entries were displayed.
*>
-
Place any target ports online by entering one of the following commands, once for each port:
If the system has storage disks, enter:
network fcp adapter modify -node node_name -adapter adapter_name -state up
If the system is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays, enter:
fcp config adapter_name up
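For example, on a system with storage disks, a hypothetical target port 0g on node4 could be brought online from the cluster prompt (the node and adapter names are placeholders):
network fcp adapter modify -node node4 -adapter 0g -state up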
-
Cable the port.
-
Take one of the following actions:
If the system that you are upgrading has storage disks:
-
halt
-
boot_ontap menu
If you are upgrading to an A800, go to Step 23.
If the system is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays, return to the section Install and boot node4, and resume at Step 23.
-
On node4, go to the boot menu and using 22/7, select the hidden option
boot_after_controller_replacement
. At the prompt, enter node2 to reassign the disks of node2 to node4, as per the following example.
LOADER-A> boot_ontap menu
.
.
<output truncated>
.
All rights reserved.
*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************
.
<output truncated>
.
Please choose one of the following:

(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
(9) Configure Advanced Drive Partitioning.
(10) Set Onboard Key Manager recovery secrets.
(11) Configure node for external key management.
Selection (1-11)? 22/7

(22/7)                               Print this secret List
(25/6)                               Force boot with multiple filesystem disks missing.
(25/7)                               Boot w/ disk labels forced to clean.
(29/7)                               Bypass media errors.
(44/4a)                              Zero disks if needed and create new flexible root volume.
(44/7)                               Assign all disks, Initialize all disks as SPARE, write DDR labels
.
.
<output truncated>
.
.
(wipeconfig)                         Clean all configuration on boot device
(boot_after_controller_replacement)  Boot after controller upgrade
(boot_after_mcc_transition)          Boot after MCC transition
(9a)                                 Unpartition all disks and remove their ownership information.
(9b)                                 Clean configuration and initialize node with partitioned disks.
(9c)                                 Clean configuration and initialize node with whole disks.
(9d)                                 Reboot the node.
(9e)                                 Return to main boot menu.

The boot device has changed. System configuration information could be lost.
Use option (6) to restore the system configuration, or option (4) to initialize all disks and setup a new system.
Normal Boot is prohibited.

Please choose one of the following:

(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
(9) Configure Advanced Drive Partitioning.
(10) Set Onboard Key Manager recovery secrets.
(11) Configure node for external key management.
Selection (1-11)? boot_after_controller_replacement

This will replace all flash-based configuration with the last backup to disks. Are you sure you want to continue?: yes
.
.
<output truncated>
.
.
Controller Replacement: Provide name of the node you would like to replace: <nodename of the node being replaced>
Changing sysid of node node2 disks.
Fetched sanown old_owner_sysid = 536940063 and calculated old sys id = 536940063
Partner sysid = 4294967295, owner sysid = 536940063
.
.
<output truncated>
.
.
varfs_backup_restore: restore using /mroot/etc/varfs.tgz
varfs_backup_restore: attempting to restore /var/kmip to the boot device
varfs_backup_restore: failed to restore /var/kmip to the boot device
varfs_backup_restore: attempting to restore env file to the boot device
varfs_backup_restore: successfully restored env file to the boot device
wrote key file "/tmp/rndc.key"
varfs_backup_restore: timeout waiting for login
varfs_backup_restore: Rebooting to load the new varfs
Terminated

<node reboots>

System rebooting...
.
.
Restoring env file from boot media...
copy_env_file:scenario = head upgrade
Successfully restored env file from boot media...
Rebooting to load the restored env file...
.
System rebooting...
.
.
.
<output truncated>
.
.
.
.
WARNING: System ID mismatch. This usually occurs when replacing a boot device or NVRAM cards!
Override system ID? {y|n} y
.
.
.
.
Login:
In the above console output example, ONTAP will prompt you for the partner node name if the system uses Advanced Disk Partitioning (ADP) disks.
-
If the system goes into a reboot loop with the message no disks found, this indicates that the system has reset the FC or UTA/UTA2 ports back to target mode and therefore is unable to see any disks. To resolve this, continue with Step 17 to Step 22, or go to the section Verify the node4 installation.
-
Press Ctrl-C during AUTOBOOT to stop the node at the LOADER> prompt.
-
At the LOADER prompt, enter maintenance mode:
boot_ontap maint
-
In maintenance mode, display all the previously set initiator ports that are now in target mode:
ucadmin show
Change the ports back to initiator mode:
ucadmin modify -m fc -t initiator -f adapter_name
-
Verify that the ports have been changed to initiator mode:
ucadmin show
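A minimal sketch of this revert-and-verify sequence, assuming a hypothetical port 0g was reset to target mode (substitute your own adapter names):
*> ucadmin show
*> ucadmin modify -m fc -t initiator 0g
*> ucadmin show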
-
Exit maintenance mode:
halt
-
At the LOADER prompt, boot up:
boot_ontap menu
Now, on booting, the node can detect all the disks that were previously assigned to it and can boot up as expected.
When the cluster nodes you are replacing use root volume encryption, ONTAP is unable to read the volume information from the disks. Restore the keys for the root volume.
This only applies when the root volume is using NetApp Volume Encryption.
-
Return to the special boot menu:
LOADER> boot_ontap menu
Please choose one of the following:

(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
(9) Configure Advanced Drive Partitioning.
(10) Set Onboard Key Manager recovery secrets.
(11) Configure node for external key management.
Selection (1-11)? 10
-
Select (10) Set Onboard Key Manager recovery secrets.
-
Enter
y
at the following prompt:
This option must be used only in disaster recovery procedures. Are you sure? (y or n): y
-
At the prompt, enter the key-manager passphrase.
-
Enter the backup data when prompted.
You must have obtained the passphrase and backup data in the Prepare the nodes for upgrade section of this procedure.
-
After the system boots to the special boot menu again, run option (1) Normal Boot.
You might encounter an error at this stage. If an error occurs, repeat the substeps in Step 22 until the system boots normally.
-
-
If you are upgrading from a system with external disks to a system that supports internal and external disks (AFF A800 systems, for example), set the node2 aggregate as the root aggregate to ensure node4 boots from the root aggregate of node2. To set the root aggregate, go to the boot menu and select option
5
to enter maintenance mode.
You must perform the following substeps in the exact order shown; failure to do so might cause an outage or even data loss. The following procedure sets node4 to boot from the root aggregate of node2:
-
Enter maintenance mode:
boot_ontap maint
-
Check the RAID, plex, and checksum information for the node2 aggregate:
aggr status -r
-
Check the status of the node2 aggregate:
aggr status
-
If necessary, bring the node2 aggregate online:
aggr online root_aggr_from_node2
-
Prevent node4 from booting from its original root aggregate:
aggr offline root_aggr_on_node4
-
Set the node2 root aggregate as the new root aggregate for node4:
aggr options aggr_from_node2 root
-
Verify that the root aggregate of node4 is offline and the root aggregate for the disks brought over from node2 is online and set to root:
aggr status
Failing to perform the previous substep might cause node4 to boot from the internal root aggregate, or it might cause the system to assume a new cluster configuration exists or prompt you to identify one. The following shows an example of the command output:
---------------------------------------------------------------------
 Aggr                  State     Status           Options
 aggr0_nst_fas8080_15  online    raid_dp, aggr    root, nosnap=on
                                 fast zeroed
                                 64-bit
 aggr0                 offline   raid_dp, aggr    diskroot
                                 fast zeroed
                                 64-bit
---------------------------------------------------------------------
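The following is a minimal sketch of the complete re-root sequence from Maintenance mode, assuming the node2 root aggregate came over as aggr0_node2 and the original node4 root aggregate is aggr0; substitute the aggregate names reported by aggr status on your system:
*> aggr status
*> aggr online aggr0_node2
*> aggr offline aggr0
*> aggr options aggr0_node2 root
*> aggr status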