
Set the FC or UTA/UTA2 configuration on node4

Contributors: netapp-pcarriga, netapp-aoife, netapp-aherbin

If node4 has onboard FC ports, onboard unified target adapter (UTA/UTA2) ports, or a UTA/UTA2 card, you must configure the settings before completing the rest of the procedure.

About this task

You might need to complete the Configure FC ports on node4 or Check and configure UTA/UTA2 ports on node4 section, or both sections.

Note If node4 does not have onboard FC ports, onboard UTA/UTA2 ports, or a UTA/UTA2 card, and you are upgrading a system with storage disks, you can skip to the Map ports from node2 to node4 section.
However, if you have a V-Series system or FlexArray Virtualization Software and are connected to storage arrays, and node4 does not have onboard FC ports, onboard UTA/UTA2 ports, or a UTA/UTA2 card, you must return to the section Install and boot node4 and resume at Step 22. Make sure that node4 has sufficient rack space. If node4 is in a separate chassis from node2, you can put node4 in the same location as node3. If node2 and node4 are in the same chassis, then node4 is already in its appropriate rack location.

Configure FC ports on node4

If node4 has FC ports, either onboard or on an FC adapter, you must set port configurations on the node before you bring it into service because the ports are not preconfigured. If the ports are not configured, you might experience a disruption in service.

Before you begin

You must have the values of the FC port settings from node2 that you saved in the section Prepare the nodes for upgrade.

About this task

You can skip this section if your system does not have FC configurations. If your system has onboard UTA/UTA2 ports or a UTA/UTA2 adapter, you configure them in Check and configure UTA/UTA2 ports on node4.

Important If your system has storage disks, you must enter the commands in this section at the cluster prompt. If you have a V-Series system or a system with FlexArray Virtualization Software connected to storage arrays, you enter commands in this section in Maintenance mode.
Steps
  1. Take one of the following actions:

    If the system that you are upgrading…​ Then…

    Has storage disks

    system node hardware unified-connect show

    Is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays

    ucadmin show

    The system displays information about all FC and converged network adapters on the system.

  2. Compare the FC settings on node4 with the settings that you captured earlier from node2.

  3. Take one of the following actions:

    If the system that you are upgrading…​ Then…

    Has storage disks

    Modify the FC ports on node4 as needed:

    • To program target ports:

      ucadmin modify -m fc -t target adapter

    • To program initiator ports:

      ucadmin modify -m fc -t initiator adapter

    -t is the FC4 type: target or initiator.

    Is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays

    Modify the FC ports on node4 as needed:

    ucadmin modify -m fc -t initiator -f adapter_port_name

    -t is the FC4 type, target or initiator.

    Note The FC ports must be programmed as initiators.
  4. Exit Maintenance mode:

    halt

  5. Boot the system from the LOADER prompt:

    boot_ontap menu

  6. After you enter the command, wait until the system stops at the boot environment prompt.

  7. Select option 5 from the boot menu for maintenance mode.

  8. Take one of the following actions:

    If the system that you are upgrading…​ Then…​

    Has storage disks

    Is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays

    • Go to Check and configure UTA/UTA2 ports on node4 if node4 has a UTA/UTA2 card or UTA/UTA2 onboard ports.

    • Skip the section Check and configure UTA/UTA2 ports on node4 if node4 does not have a UTA/UTA2 card or UTA/UTA2 onboard ports, return to the section Install and boot node4, and resume at Step 23.
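If you saved the node2 port settings to a file in Prepare the nodes for upgrade and capture the node4 output the same way, the comparison in Step 2 can be scripted before you enter any ucadmin modify commands. The following sketch is illustrative only and is not an ONTAP tool; the helper names and the assumption that both captures use the standard seven-column ucadmin show layout are hypothetical:

```python
# Illustrative helper (not an ONTAP tool): compare two saved "ucadmin show"
# captures, e.g. node2 captured during "Prepare the nodes for upgrade" and
# node4 captured in Step 1. Assumes the standard column layout:
# Node  Adapter  Current Mode  Current Type  Pending Mode  Pending Type  Status

def parse_ucadmin(text):
    """Return {adapter: (current_mode, current_type)} from ucadmin show output."""
    settings = {}
    for line in text.splitlines():
        fields = line.split()
        # Data rows have 7 columns and an adapter name such as 0e, 1a, 2b.
        if len(fields) == 7 and fields[1][:1].isdigit():
            settings[fields[1]] = (fields[2], fields[3])
    return settings

def diff_settings(node2_text, node4_text):
    """List adapters whose (mode, type) on node4 differ from node2."""
    old, new = parse_ucadmin(node2_text), parse_ucadmin(node4_text)
    return sorted((adapter, old[adapter], new.get(adapter))
                  for adapter in old if new.get(adapter) != old[adapter])
```

Each mismatch reported by diff_settings identifies an adapter that needs a ucadmin modify call in Step 3.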

Check and configure UTA/UTA2 ports on node4

If node4 has onboard UTA/UTA2 ports or a UTA/UTA2 card, you must check the configuration of the ports and configure them, depending on how you want to use the upgraded system.

Before you begin

You must have the correct SFP+ modules for the UTA/UTA2 ports.

About this task

UTA/UTA2 ports can be configured into native FC mode or UTA/UTA2 mode. FC mode supports FC initiator and FC target; UTA/UTA2 mode allows concurrent NIC and FCoE traffic to share the same 10GbE SFP+ interface and supports FC target.

Note NetApp marketing materials might use the term UTA2 to refer to CNA adapters and ports. However, the CLI uses the term CNA.

UTA/UTA2 ports might be on an adapter or on the controller with the following configurations:

  • UTA/UTA2 cards ordered at the same time as the controller are configured before shipment to have the personality you requested.

  • UTA/UTA2 cards ordered separately from the controller are shipped with the default FC target personality.

  • Onboard UTA/UTA2 ports on new controllers are configured (before shipment) to have the personality you requested.

However, you should check the configuration of the UTA/UTA2 ports on node4 and change it, if necessary.
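Because a port's desired personality can be either current or already pending in the ucadmin show output, it can help to reduce that output to a per-port verdict before deciding whether the modify steps below apply. This is a hedged sketch, not an ONTAP command; the desired-personality mapping and the standard seven-column layout are assumptions:

```python
# Illustrative check (not an ONTAP tool): for each adapter in "desired",
# report it only if neither its current nor its pending (mode, FC4 type)
# matches what you want. Assumes the standard "ucadmin show" column layout.

def ports_needing_change(ucadmin_output, desired):
    """desired maps adapter -> (mode, fc4_type), e.g. {"0g": ("cna", "target")}."""
    wrong = []
    for line in ucadmin_output.splitlines():
        fields = line.split()
        if len(fields) == 7 and fields[1] in desired:
            current = (fields[2], fields[3])
            pending = (fields[4], fields[5])
            if desired[fields[1]] not in (current, pending):
                wrong.append(fields[1])
    return wrong
```

Ports reported by this check are the candidates for the ucadmin modify step below; ports whose desired personality is already pending do not need another modify.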

Warning If your system has storage disks, you enter the commands in this section at the cluster prompt unless directed to enter Maintenance mode. If you have a MetroCluster FC system, a V-Series system, or a system with FlexArray Virtualization software that is connected to storage arrays, you must be in Maintenance mode to configure UTA/UTA2 ports.
Steps
  1. Check how the ports are currently configured by using one of the following commands on node4:

    If the system…​ Then…

    Has storage disks

    system node hardware unified-connect show

    Is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays

    ucadmin show

    The system displays output similar to the following example:

    *> ucadmin show
                    Current  Current    Pending  Pending   Admin
    Node   Adapter  Mode     Type       Mode     Type      Status
    ----   -------  -------  ---------  -------  --------  -------
    f-a    0e       fc       initiator  -        -         online
    f-a    0f       fc       initiator  -        -         online
    f-a    0g       cna      target     -        -         online
    f-a    0h       cna      target     -        -         online
    *>
  2. If the current SFP+ module does not match the desired use, replace it with the correct SFP+ module.

    Contact your NetApp representative to obtain the correct SFP+ module.

  3. Examine the output of the ucadmin show command and determine whether the UTA/UTA2 ports have the personality you want.

  4. Take one of the following actions:

    If the CNA ports…​ Then…

    Do not have the personality that you want

    Go to Step 5.

    Have the personality that you want

    Skip Step 5 through Step 12 and go to Step 13.

  5. Take one of the following actions:

    If you are configuring…​ Then…

    Ports on a UTA/UTA2 card

    Go to Step 7

    Onboard UTA/UTA2 ports

    Skip Step 7 and go to Step 8.

  6. If the adapter is in initiator mode, and if the UTA/UTA2 port is online, take the UTA/UTA2 port offline:

    storage disable adapter adapter_name

    Adapters in target mode are automatically offline in Maintenance mode.

  7. If the current configuration does not match the desired use, change the configuration as needed:

    ucadmin modify -m fc|cna -t initiator|target adapter_name

    • -m is the personality mode, fc or cna.

    • -t is the FC4 type, target or initiator.

      Note You must use FC initiator for tape drives, FlexArray Virtualization systems, and MetroCluster configurations. You must use FC target for SAN clients.
  8. Verify the settings by using the following command and examining its output:

    ucadmin show

  9. Verify the settings:

    If the system…​ Then…

    Has storage disks

    ucadmin show

    Is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays

    ucadmin show

    The output in the following examples shows that the FC4 type of adapter "1b" is changing to initiator and that the mode of adapters "2a" and "2b" is changing to cna:

    *> ucadmin show
                    Current  Current    Pending  Pending    Admin
    Node   Adapter  Mode     Type       Mode     Type       Status
    ----   -------  -------  ---------  -------  ---------  -------
    f-a    1a       fc       initiator  -        -          online
    f-a    1b       fc       target     -        initiator  online
    f-a    2a       fc       target     cna      -          online
    f-a    2b       fc       target     cna      -          online
    4 entries were displayed.
    *>
  10. Place any target ports online by entering one of the following commands, once for each port:

    If the system…​ Then…

    Has storage disks

    network fcp adapter modify -node node_name -adapter adapter_name -state up

    Is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays

    fcp config adapter_name up

  11. Cable the port.

  12. Take one of the following actions:

    If the system…​ Then…​

    Has storage disks

    Go to the section Map ports from node2 to node4.

    Is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays

    Return to the section Install and boot node4, and resume at Step 23.

  13. Exit Maintenance mode:

    halt

  14. Boot the node to the boot menu:

    boot_ontap menu

    If you are upgrading to an A800, go to Step 23.

  15. On node4, go to the boot menu and enter 22/7 to select the hidden option boot_after_controller_replacement. At the prompt, enter node2 to reassign the disks of node2 to node4, as shown in the following example.

    LOADER-A> boot_ontap menu ...
    *******************************
    *                             *
    * Press Ctrl-C for Boot Menu. *
    *                             *
    *******************************
    .
    .
    Please choose one of the following:
    
    (1) Normal Boot.
    (2) Boot without /etc/rc.
    (3) Change password.
    (4) Clean configuration and initialize all disks.
    (5) Maintenance mode boot.
    (6) Update flash from backup config.
    (7) Install new software first.
    (8) Reboot node.
    (9) Configure Advanced Drive Partitioning.
    Selection (1-9)? 22/7
    .
    .
    (boot_after_controller_replacement) Boot after controller upgrade
    (9a)                                Unpartition all disks and remove their ownership information.
    (9b)                                Clean configuration and initialize node with partitioned disks.
    (9c)                                Clean configuration and initialize node with whole disks.
    (9d)                                Reboot the node.
    (9e)                                Return to main boot menu.
    
    Please choose one of the following:
    
    (1) Normal Boot.
    (2) Boot without /etc/rc.
    (3) Change password.
    (4) Clean configuration and initialize all disks.
    (5) Maintenance mode boot.
    (6) Update flash from backup config.
    (7) Install new software first.
    (8) Reboot node.
    (9) Configure Advanced Drive Partitioning.
    Selection (1-9)? boot_after_controller_replacement
    .
    This will replace all flash-based configuration with the last backup to disks. Are you sure you want to continue?: yes
    .
    .
    Controller Replacement: Provide name of the node you would like to replace: <name of the node being replaced>
    .
    .
    Changing sysid of node <node being replaced> disks.
    Fetched sanown old_owner_sysid = 536953334 and calculated old sys id = 536953334
    Partner sysid = 4294967295, owner sysid = 536953334
    .
    .
    .
    Terminated
    <node reboots>
    .
    .
    System rebooting...
    .
    Restoring env file from boot media...
    copy_env_file:scenario = head upgrade
    Successfully restored env file from boot media...
    .
    .
    System rebooting...
    .
    .
    .
    WARNING: System ID mismatch. This usually occurs when replacing a boot device or NVRAM cards!
    Override system ID? {y|n} y
    Login: ...
  16. If the system goes into a reboot loop with the message no disks found, it is because the ports have been reset back to target mode and therefore cannot see any disks. Continue with Step 17 through Step 22 to resolve this.

  17. Press Ctrl-C during AUTOBOOT to stop the node at the LOADER> prompt.

  18. At the LOADER prompt, enter Maintenance mode:

    boot_ontap maint

  19. In Maintenance mode, display all the previously set initiator ports that are now in target mode:

    ucadmin show

    Change the ports back to initiator mode:

    ucadmin modify -m fc -t initiator -f adapter_name

  20. Verify that the ports have been changed to initiator mode:

    ucadmin show

  21. Exit Maintenance mode:

    halt

    Note

    If you are upgrading from a system that supports external disks to a system that also supports external disks, go to Step 22.

    If you are upgrading from a system that uses external disks to a system that supports both internal and external disks, for example, an AFF A800 system, go to Step 23.

  22. At the LOADER prompt, boot up:

    boot_ontap menu

    Now, on booting, the node detects all the disks that were previously assigned to it and boots up as expected.

    If the cluster nodes you are replacing use root volume encryption, ONTAP cannot read the volume information from the disks. Restore the keys for the root volume:

    1. Return to the special boot menu:

      LOADER> boot_ontap menu

      Please choose one of the following:
      (1) Normal Boot.
      (2) Boot without /etc/rc.
      (3) Change password.
      (4) Clean configuration and initialize all disks.
      (5) Maintenance mode boot.
      (6) Update flash from backup config.
      (7) Install new software first.
      (8) Reboot node.
      (9) Configure Advanced Drive Partitioning.
      (10) Set Onboard Key Manager recovery secrets.
      (11) Configure node for external key management.
      
      Selection (1-11)? 10
    2. Select (10) Set Onboard Key Manager recovery secrets

    3. Enter y at the following prompt:

      This option must be used only in disaster recovery procedures. Are you sure? (y or n): y

    4. At the prompt, enter the key-manager passphrase.

    5. Enter the backup data when prompted.

      Note You must have obtained the passphrase and backup data in the Prepare the nodes for upgrade section of this procedure.
    6. After the system boots to the special boot menu again, run option (1) Normal Boot

      Note You might encounter an error at this stage. If an error occurs, repeat the substeps in Step 22 until the system boots normally.
  23. If you are upgrading from a system with external disks to a system that supports internal and external disks (an AFF A800 system, for example), set the node2 aggregate as the root aggregate to confirm that node4 boots from the root aggregate of node2. To set the root aggregate, go to the boot menu and select option 5 to enter Maintenance mode.

    Warning You must perform the following substeps in the exact order shown; failure to do so might cause an outage or even data loss.

    The following procedure sets node4 to boot from the root aggregate of node2:

    1. Enter maintenance mode:

      boot_ontap maint

    2. Check the RAID, plex, and checksum information for the node2 aggregate:

      aggr status -r

    3. Check the status of the node2 aggregate:

      aggr status

    4. If necessary, bring the node2 aggregate online:

      aggr online root_aggr_from_node2

    5. Prevent node4 from booting from its original root aggregate:

      aggr offline root_aggr_on_node4

    6. Set the node2 root aggregate as the new root aggregate for node4:

      aggr options root_aggr_from_node2 root
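Because the substeps above must be performed in the exact order shown, it can help to lay out the full Maintenance mode command sequence for review before you begin typing. The following sketch only assembles the commands from this procedure; it is not an ONTAP tool, and the default aggregate names are placeholders that you must replace with the real names from aggr status:

```python
# Illustrative checklist builder (not an ONTAP tool) for the substeps that
# boot node4 from the node2 root aggregate. The aggregate name arguments
# are placeholders; substitute the actual names shown by "aggr status".

def root_aggr_commands(node2_root="root_aggr_from_node2",
                       node4_root="root_aggr_on_node4"):
    """Return the Maintenance mode commands in the required order."""
    return [
        "boot_ontap maint",                 # 1. enter Maintenance mode
        "aggr status -r",                   # 2. RAID, plex, and checksum info
        "aggr status",                      # 3. state of the node2 aggregate
        f"aggr online {node2_root}",        # 4. bring the node2 aggregate online
        f"aggr offline {node4_root}",       # 5. block booting from the old root
        f"aggr options {node2_root} root",  # 6. set the new root aggregate
    ]

for command in root_aggr_commands():
    print(command)
```

Printing the list gives you a paper checklist to follow at the Maintenance mode prompt; the ordering of the online, offline, and options commands mirrors substeps 4 through 6 above.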