Set the FC or UTA/UTA2 configuration on node3


If node3 has onboard FC ports, onboard unified target adapter (UTA/UTA2) ports, or a UTA/UTA2 card, you must configure the settings before completing the rest of the procedure.

About this task

You might need to complete the section Configure FC ports on node3, the section Check and configure UTA/UTA2 ports on node3, or both sections.

NetApp marketing materials might use the term UTA2 to refer to CNA adapters and ports. However, the CLI uses the term CNA.
  • If node3 does not have onboard FC ports, onboard UTA/UTA2 ports, or a UTA/UTA2 card, and you are upgrading a system with storage disks, you can skip to the section Verify the node3 installation.

  • However, if you have a V-Series system or a system with FlexArray Virtualization software with storage arrays, and node3 does not have onboard FC ports, onboard UTA/UTA2 ports, or a UTA/UTA2 card, return to the section Install and boot node3 and resume at Step 23.

Configure FC ports on node3

If node3 has FC ports, either onboard or on an FC adapter, you must set port configurations on the node before you bring it into service because the ports are not preconfigured. If the ports are not configured, you might experience a disruption in service.

Before you begin

You must have the values of the FC port settings from node1 that you saved in the section Prepare the nodes for upgrade.

About this task

You can skip this section if your system does not have FC configurations. If your system has onboard UTA/UTA2 ports or a UTA/UTA2 card, you configure them in Check and configure UTA/UTA2 ports on node3.

Important: If your system has storage disks, enter the commands in this section at the cluster prompt. If you have a V-Series system or have FlexArray Virtualization Software and are connected to storage arrays, enter the commands in this section in Maintenance mode.

Steps
  1. Compare the FC settings on node3 with the settings that you captured earlier from node1.

  2. Take one of the following actions:

    If the system that you are upgrading… Then…

    Has storage disks

    In Maintenance mode (option 5 at the boot menu), modify the FC ports on node3 as needed by using one of the following commands:

    • To program target ports:
      ucadmin modify -m fc -t target <adapter>

    • To program initiator ports:
      ucadmin modify -m fc -t initiator <adapter>
      -t is the FC4 type: target or initiator.

    Is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays

    In Maintenance mode (option 5 at the boot menu), modify the FC ports on node3 as needed by using the following command:
    ucadmin modify -m fc -t initiator -f <adapter_port_name>
    -t is the FC4 type, target or initiator.
    Note: The FC ports must be programmed as initiators.
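
    The following Maintenance mode session is a minimal hypothetical illustration (the adapter name 0e and the output values are assumed, and the output is abbreviated); it programs port 0e as a target and confirms that the change remains pending until the next boot:

    *> ucadmin modify -m fc -t target 0e
    *> ucadmin show
    Adapter Current Mode Current Type Pending Mode Pending Type Admin Status
    0e      fc           initiator    -            target       offline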

  3. Take one of the following actions:

    If the system that you are upgrading… Then…

    Has storage disks

    Verify the new settings by using the following command and examining the output:
    ucadmin show

    Is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays

    Verify the new settings by using the following command and examining the output:
    ucadmin show

  4. Exit Maintenance mode by using the following command:

    halt

  5. Boot the system from the loader prompt by using the following command:

    boot_ontap menu

  6. After you enter the command, wait until the system stops at the boot menu.

  7. Select option 5 from the boot menu for maintenance mode.
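
    A hypothetical console session for Steps 4 through 7 might look like the following (the output is abbreviated and illustrative):

    *> halt
    ...
    LOADER> boot_ontap menu
    ...
    Please choose one of the following:
    (1)  Normal Boot.
    ...
    (5)  Maintenance mode boot.
    ...
    Selection (1-11)? 5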

  8. Take one of the following actions:

    If the system that you are upgrading… Then…

    Has storage disks

    If node3 has a UTA/UTA2 card or onboard UTA/UTA2 ports, go to the section Check and configure UTA/UTA2 ports on node3. Otherwise, skip that section and go to the section Verify the node3 installation.

    Is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays

    If node3 has a UTA/UTA2 card or onboard UTA/UTA2 ports, go to the section Check and configure UTA/UTA2 ports on node3. Otherwise, skip that section, return to the section Install and boot node3, and resume at Step 23.

Check and configure UTA/UTA2 ports on node3

If node3 has onboard UTA/UTA2 ports or a UTA/UTA2 card, you must check the configuration of the ports and possibly reconfigure them, depending on how you want to use the upgraded system.

Before you begin

You must have the correct SFP+ modules for the UTA/UTA2 ports.

About this task

If you want to use a Unified Target Adapter (UTA/UTA2) port for FC, you must first verify how the port is configured.

NetApp marketing materials might use the term UTA2 to refer to CNA adapters and ports. However, the CLI uses the term CNA.

You can use the ucadmin show command to verify the current port configuration:

*> ucadmin show
Adapter Current Mode Current Type Pending Mode Pending Type Admin Status
0e      fc           target       -            initiator    offline
0f      fc           target       -            initiator    offline
0g      fc           target       -            initiator    offline
0h      fc           target       -            initiator    offline
1a      fc           target       -            -            online
1b      fc           target       -            -            online
6 entries were displayed.

UTA/UTA2 ports can be configured into native FC mode or UTA/UTA2 mode. FC mode supports FC initiator and FC target; UTA/UTA2 mode allows concurrent NIC and FCoE traffic sharing the same 10 GbE SFP+ interface and supports FC targets.
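
For example, in the following ucadmin show output (a hypothetical illustration with assumed adapter names), port 0e runs in native FC mode as a target, while port 0g runs in CNA mode and can carry NIC and FCoE traffic concurrently:

*> ucadmin show
Adapter Current Mode Current Type Pending Mode Pending Type Admin Status
0e      fc           target       -            -            online
0g      cna          target       -            -            online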

UTA/UTA2 ports might be located on an adapter or on the controller and can have the following configurations. Check the configuration of the UTA/UTA2 ports on node3 and change it, if necessary:

  • UTA/UTA2 cards ordered when the controller is ordered are configured before shipment to have the personality you request.

  • UTA/UTA2 cards ordered separately from the controller are shipped with the default FC target personality.

  • Onboard UTA/UTA2 ports on new controllers are configured before shipment to have the personality you request.

    Attention: If your system has storage disks, you enter the commands in this section at the cluster prompt unless directed to enter Maintenance mode. If you have a V-Series system or have FlexArray Virtualization Software and are connected to storage arrays, you enter commands in this section at the Maintenance mode prompt. You must be in Maintenance mode to configure UTA/UTA2 ports.

Steps
  1. Check how the ports are currently configured by entering the following command on node3:

    If the system… Then…

    Has storage disks

    system node hardware unified-connect show

    Is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays

    ucadmin show

    The system displays output similar to the following example:

    *> ucadmin show
    Adapter Current Mode Current Type Pending Mode Pending Type Admin Status
    0e      fc           initiator    -            -            online
    0f      fc           initiator    -            -            online
    0g      cna          target       -            -            online
    0h      cna          target       -            -            online
    *>
  2. If the current SFP+ module does not match the desired use, replace it with the correct SFP+ module.

    Contact your NetApp representative to obtain the correct SFP+ module.

  3. Examine the output of the ucadmin show command and determine whether the UTA/UTA2 ports have the personality you want.

  4. Take one of the following actions:

    If the UTA/UTA2 ports… Then…

    Do not have the personality that you want

    Go to Step 5.

    Have the personality that you want

    Skip Step 5 through Step 12 and go to Step 13.

  5. Take one of the following actions:

    If you are configuring… Then…

    Ports on a UTA/UTA2 card

    Go to Step 7.

    Onboard UTA/UTA2 ports

    Skip Step 7 and go to Step 8.

  6. If the adapter is in initiator mode, and if the UTA/UTA2 port is online, take the UTA/UTA2 port offline by using the following command:

    storage disable adapter <adapter_name>

    Adapters in target mode are automatically offline in Maintenance mode.

  7. If the current configuration does not match the desired use, change the configuration as needed by using the following command:

    ucadmin modify -m fc|cna -t initiator|target <adapter_name>

    • -m is the personality mode, fc or cna.

    • -t is the FC4 type, target or initiator.

      You must use FC initiator for tape drives, FlexArray Virtualization systems, and MetroCluster configurations. You must use FC target for SAN clients.
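
    For example, the following hypothetical Maintenance mode session (the adapter name 0e and the output values are assumed) first takes an online initiator port offline (Step 6) and then converts it to a CNA target; the change remains pending until the next boot:

    *> storage disable adapter 0e
    *> ucadmin modify -m cna -t target 0e
    *> ucadmin show
    Adapter Current Mode Current Type Pending Mode Pending Type Admin Status
    0e      fc           initiator    cna          target       offline
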
  8. Verify the settings by using the following command:

    ucadmin show

  9. Verify the settings by using one of the following commands:

    If the system… Then…

    Has storage disks

    ucadmin show

    Is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays

    ucadmin show

    The output in the following examples shows that the FC4 type of adapter 1b is changing to initiator and that the mode of adapters 2a and 2b is changing to cna:

    *> ucadmin show
    Adapter Current Mode Current Type  Pending Mode Pending Type Admin Status
    1a      fc           initiator     -            -            online
    1b      fc           target        -            initiator    online
    2a      fc           target        cna          -            online
    2b      fc           target        cna          -            online
    *>
  10. Place any target ports online by entering one of the following commands, once for each port:

    If the system… Then…

    Has storage disks

    network fcp adapter modify -node <node_name> -adapter <adapter_name> -state up

    Is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays

    fcp config <adapter_name> up
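
    For example (the node and adapter names are hypothetical), each of the following commands brings target port 0g online on the corresponding type of system:

    cluster::> network fcp adapter modify -node node3 -adapter 0g -state up

    *> fcp config 0g up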

  11. Cable the port.

  12. Take one of the following actions:

    If the system… Then…

    Has storage disks

    Go to Verify the node3 installation.

    Is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays

    Resume at Step 23.

  13. Exit Maintenance mode by using the following command:

    halt

  14. Boot the node to the boot menu by running boot_ontap menu. If you are upgrading to an A800, go to Step 23.

  15. On node3, go to the boot menu and, using 22/7, select the hidden option boot_after_controller_replacement. At the prompt, enter node1 to reassign the disks of node1 to node3, as shown in the following example.

    LOADER-A> boot_ontap menu
    .
    <output truncated>
    .
    All rights reserved.
    *******************************
    *                             *
    * Press Ctrl-C for Boot Menu. *
    *                             *
    *******************************
    .
    <output truncated>
    .
    Please choose one of the following:
    (1)  Normal Boot.
    (2)  Boot without /etc/rc.
    (3)  Change password.
    (4)  Clean configuration and initialize all disks.
    (5)  Maintenance mode boot.
    (6)  Update flash from backup config.
    (7)  Install new software first.
    (8)  Reboot node.
    (9)  Configure Advanced Drive Partitioning.
    (10) Set Onboard Key Manager recovery secrets.
    (11) Configure node for external key management.
    Selection (1-11)? 22/7
    (22/7) Print this secret List
    (25/6) Force boot with multiple filesystem disks missing.
    (25/7) Boot w/ disk labels forced to clean.
    (29/7) Bypass media errors.
    (44/4a) Zero disks if needed and create new flexible root volume.
    (44/7) Assign all disks, Initialize all disks as SPARE, write DDR labels
    .
    <output truncated>
    .
    (wipeconfig)                        Clean all configuration on boot device
    (boot_after_controller_replacement) Boot after controller upgrade
    (boot_after_mcc_transition)         Boot after MCC transition
    (9a)                                Unpartition all disks and remove their ownership information.
    (9b)                                Clean configuration and initialize node with partitioned disks.
    (9c)                                Clean configuration and initialize node with whole disks.
    (9d)                                Reboot the node.
    (9e)                                Return to main boot menu.
    The boot device has changed. System configuration information could be lost. Use option (6) to restore the system configuration, or option (4) to initialize all disks and setup a new system.
    Normal Boot is prohibited.
    Please choose one of the following:
    (1)  Normal Boot.
    (2)  Boot without /etc/rc.
    (3)  Change password.
    (4)  Clean configuration and initialize all disks.
    (5)  Maintenance mode boot.
    (6)  Update flash from backup config.
    (7)  Install new software first.
    (8)  Reboot node.
    (9)  Configure Advanced Drive Partitioning.
    (10) Set Onboard Key Manager recovery secrets.
    (11) Configure node for external key management.
    Selection (1-11)? boot_after_controller_replacement
    This will replace all flash-based configuration with the last backup to disks. Are you sure you want to continue?: yes
    .
    <output truncated>
    .
    Controller Replacement: Provide name of the node you would like to replace:<nodename of the node being replaced>
    Changing sysid of node node1 disks.
    Fetched sanown old_owner_sysid = 536940063 and calculated old sys id = 536940063
    Partner sysid = 4294967295, owner sysid = 536940063
    .
    <output truncated>
    .
    varfs_backup_restore: restore using /mroot/etc/varfs.tgz
    varfs_backup_restore: attempting to restore /var/kmip to the boot device
    varfs_backup_restore: failed to restore /var/kmip to the boot device
    varfs_backup_restore: attempting to restore env file to the boot device
    varfs_backup_restore: successfully restored env file to the boot device wrote key file "/tmp/rndc.key"
    varfs_backup_restore: timeout waiting for login
    varfs_backup_restore: Rebooting to load the new varfs
    Terminated
    <node reboots>
    System rebooting...
    .
    Restoring env file from boot media...
    copy_env_file:scenario = head upgrade
    Successfully restored env file from boot media...
    Rebooting to load the restored env file...
    .
    System rebooting...
    .
    <output truncated>
    .
    WARNING: System ID mismatch. This usually occurs when replacing a boot device or NVRAM cards!
    Override system ID? {y|n} y
    .
    Login:
    In the above console output example, ONTAP will prompt you for the partner node name if the system uses Advanced Disk Partitioning (ADP) disks.
  16. If the system goes into a reboot loop with the message no disks found, this indicates that the system has reset the FC or UTA/UTA2 ports back to target mode and therefore cannot see any disks. To resolve this, continue with Step 17 through Step 22, or go to the section Verify the node3 installation.

  17. Press Ctrl-C during autoboot to stop the node at the LOADER> prompt.

  18. At the loader prompt, enter maintenance mode by using the following command:

    boot_ontap maint

  19. In maintenance mode, display all the previously set initiator ports that are now in target mode by using the following command:

    ucadmin show

    Change the ports back to initiator mode by using the following command:

    ucadmin modify -m fc -t initiator -f <adapter_name>
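
    The following hypothetical session (the adapter name 0e and the output values are assumed) shows a port that was reset to target mode being changed back to initiator; the -f option forces the change without prompting for confirmation:

    *> ucadmin show
    Adapter Current Mode Current Type Pending Mode Pending Type Admin Status
    0e      fc           target       -            -            offline
    *> ucadmin modify -m fc -t initiator -f 0e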

  20. Verify that the ports have been changed to initiator mode by using the following command:

    ucadmin show

  21. Exit maintenance mode by using the following command:

    halt

  22. At the loader prompt, boot up by using the following command:

    boot_ontap

    On booting, the node detects all of the disks that were previously assigned to it and boots up as expected.

  23. If you are upgrading from a system with external disks to a system that supports internal and external disks (AFF A800 systems, for example), set the node1 aggregate as the root aggregate to ensure node3 boots from the root aggregate of node1. To set the root aggregate, go to the boot menu and select option 5 to enter maintenance mode.

    You must perform the following substeps in the exact order shown; failure to do so might cause an outage or even data loss.

    The following procedure sets node3 to boot from the root aggregate of node1:

    1. Enter maintenance mode by using the following command:

      boot_ontap maint

    2. Check the RAID, plex, and checksum information for the node1 aggregate by using the following command:

      aggr status -r

    3. Check the status of the node1 aggregate by using the following command:

      aggr status

    4. If necessary, bring the node1 aggregate online by using the following command:

      aggr online <root_aggr_from_node1>

    5. Prevent node3 from booting from its original root aggregate by using the following command:

      aggr offline <root_aggr_on_node3>

    6. Set the node1 root aggregate as the new root aggregate for node3 by using the following command:

      aggr options <root_aggr_from_node1> root

    7. Verify that the root aggregate of node3 is offline and the root aggregate for the disks brought over from node1 is online and set to root by using the following command:

      aggr status

      Failing to perform the previous substep might cause node3 to boot from the internal root aggregate, or it might cause the system to assume a new cluster configuration exists or prompt you to identify one.

      The following shows an example of the command output:

      Aggr                 State    Status           Options
      aggr0_nst_fas8080_15 online   raid_dp, aggr    root, nosnap=on
                                    fast zeroed,
                                    64-bit
      aggr0                offline  raid_dp, aggr    diskroot
                                    fast zeroed,
                                    64-bit
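
      As a hypothetical end-to-end illustration of these substeps (the aggregate names are assumed to match the example output above, with aggr0 as the original node3 root aggregate):

      *> aggr status -r
      *> aggr status
      *> aggr offline aggr0
      *> aggr options aggr0_nst_fas8080_15 root
      *> aggr status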