Configure the MetroCluster for transition
To prepare the configuration for transition, you add the new nodes to the existing MetroCluster configuration and then move data to the new nodes.
Sending a custom AutoSupport message prior to maintenance
Before performing the maintenance, you should issue an AutoSupport message to notify NetApp technical support that maintenance is underway. Informing technical support that maintenance is underway prevents them from opening a case on the assumption that a disruption has occurred.
This task must be performed on each MetroCluster site.
-
To prevent automatic support case generation, send an AutoSupport message to indicate that maintenance is underway:
system node autosupport invoke -node * -type all -message MAINT=maintenance-window-in-hours
“maintenance-window-in-hours” specifies the length of the maintenance window, with a maximum of 72 hours. If the maintenance is completed before the time has elapsed, you can invoke an AutoSupport message indicating the end of the maintenance period:
system node autosupport invoke -node * -type all -message MAINT=end
-
Repeat the command on the partner cluster.
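For example, to announce a 72-hour maintenance window, the message value might be specified as follows (the 72h value is illustrative; set it to the length of your planned window):
system node autosupport invoke -node * -type all -message MAINT=72h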
Enabling transition mode and disabling cluster HA
You must enable MetroCluster transition mode to allow the old and new nodes to operate together in the MetroCluster configuration, and you must disable cluster HA.
-
Enable transition:
-
Change to the advanced privilege level:
set -privilege advanced
-
Enable transition mode:
metrocluster transition enable -transition-mode non-disruptive
Run this command on one cluster only.
cluster_A::*> metrocluster transition enable -transition-mode non-disruptive

Warning: This command enables the start of a "non-disruptive" MetroCluster FC-to-IP transition. It allows the addition of hardware for another DR group that uses IP fabrics, and the removal of a DR group that uses FC fabrics. Clients will continue to access their data during a non-disruptive transition. Automatic unplanned switchover will also be disabled by this command.
Do you want to continue? {y|n}: y

cluster_A::*>
-
Return to the admin privilege level:
set -privilege admin
-
-
Verify that transition is enabled on both clusters.
cluster_A::> metrocluster transition show-mode
Transition Mode
non-disruptive

cluster_A::*>

cluster_B::*> metrocluster transition show-mode
Transition Mode
non-disruptive

cluster_B::>
-
Disable cluster HA.
You must run this command on both clusters.
cluster_A::*> cluster ha modify -configured false

Warning: This operation will unconfigure cluster HA. Cluster HA must be configured on a two-node cluster to ensure data access availability in the event of storage failover. Do you want to continue? {y|n}: y
Notice: HA is disabled.

cluster_A::*>

cluster_B::*> cluster ha modify -configured false

Warning: This operation will unconfigure cluster HA. Cluster HA must be configured on a two-node cluster to ensure data access availability in the event of storage failover. Do you want to continue? {y|n}: y
Notice: HA is disabled.

cluster_B::*>
-
Verify that cluster HA is disabled.
You must run this command on both clusters.
cluster_A::> cluster ha show
High Availability Configured: false

Warning: Cluster HA has not been configured. Cluster HA must be configured on a two-node cluster to ensure data access availability in the event of storage failover. Use the "cluster ha modify -configured true" command to configure cluster HA.

cluster_A::>

cluster_B::> cluster ha show
High Availability Configured: false

Warning: Cluster HA has not been configured. Cluster HA must be configured on a two-node cluster to ensure data access availability in the event of storage failover. Use the "cluster ha modify -configured true" command to configure cluster HA.

cluster_B::>
Joining the MetroCluster IP nodes to the clusters
You must add the four new MetroCluster IP nodes to the existing MetroCluster configuration.
You must perform this task on both clusters.
-
Add the MetroCluster IP nodes to the existing MetroCluster configuration.
-
Join the first MetroCluster IP node (node_A_3-IP) to the existing MetroCluster FC configuration.
Welcome to the cluster setup wizard.

You can enter the following commands at any time:
  "help" or "?" - if you want to have a question clarified,
  "back" - if you want to change previously answered questions, and
  "exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.

You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.

This system will send event messages and periodic reports to NetApp Technical Support. To disable this feature, enter autosupport modify -support disable within 24 hours.

Enabling AutoSupport can significantly speed problem determination and resolution, should a problem occur on your system.
For further information on AutoSupport, see: http://support.netapp.com/autosupport/

Type yes to confirm and continue {yes}: yes

Enter the node management interface port [e0M]:
Enter the node management interface IP address: 172.17.8.93
Enter the node management interface netmask: 255.255.254.0
Enter the node management interface default gateway: 172.17.8.1
A node management interface on port e0M with IP address 172.17.8.93 has been created.

Use your web browser to complete cluster setup by accessing https://172.17.8.93

Otherwise, press Enter to complete cluster setup using the command line interface:

Do you want to create a new cluster or join an existing cluster? {create, join}: join

Existing cluster interface configuration found:

Port    MTU     IP              Netmask
e0c     9000    169.254.148.217 255.255.0.0
e0d     9000    169.254.144.238 255.255.0.0

Do you want to use this configuration? {yes, no} [yes]: yes
.
.
.
-
Join the second MetroCluster IP node (node_A_4-IP) to the existing MetroCluster FC configuration.
-
-
Repeat these steps to join node_B_3-IP and node_B_4-IP to cluster_B.
Configuring intercluster LIFs, creating the MetroCluster interfaces, and mirroring root aggregates
You must create intercluster LIFs and the MetroCluster interfaces on the new MetroCluster IP nodes, and then mirror the root aggregates.
The home ports used in the examples are platform-specific. Use the home ports appropriate to your MetroCluster IP node platform.
-
On the new MetroCluster IP nodes, configure the intercluster LIFs.
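As a sketch only, an intercluster LIF might be created on each new node with a command similar to the following; the LIF name, home port, IP address, and netmask are placeholders, so substitute values for your peering network (on ONTAP releases earlier than 9.6, -role intercluster is used instead of -service-policy default-intercluster):
network interface create -vserver cluster_A -lif intercluster_lif_1 -service-policy default-intercluster -home-node node_A_3-IP -home-port e0a -address 192.168.1.10 -netmask 255.255.255.0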
-
On each site, verify that cluster peering is configured:
cluster peer show
The following example shows the cluster peering configuration on cluster_A:
cluster_A::> cluster peer show
Peer Cluster Name         Cluster Serial Number Availability   Authentication
------------------------- --------------------- -------------- --------------
cluster_B                 1-80-000011           Available      ok
The following example shows the cluster peering configuration on cluster_B:
cluster_B::> cluster peer show
Peer Cluster Name         Cluster Serial Number Availability   Authentication
------------------------- --------------------- -------------- --------------
cluster_A                 1-80-000011           Available      ok
-
Configure the DR group for the MetroCluster IP nodes:
metrocluster configuration-settings dr-group create -partner-cluster <partner-cluster-name> -local-node <local-node-name> -remote-node <remote-node-name>
cluster_A::> metrocluster configuration-settings dr-group create -partner-cluster cluster_B -local-node node_A_3-IP -remote-node node_B_3-IP
[Job 259] Job succeeded: DR Group Create is successful.

cluster_A::>
-
Verify that the DR group is created.
metrocluster configuration-settings dr-group show
cluster_A::> metrocluster configuration-settings dr-group show
DR Group ID Cluster                    Node               DR Partner Node
----------- -------------------------- ------------------ ------------------
2           cluster_A
                                       node_A_3-IP        node_B_3-IP
                                       node_A_4-IP        node_B_4-IP
            cluster_B
                                       node_B_3-IP        node_A_3-IP
                                       node_B_4-IP        node_A_4-IP
4 entries were displayed.

cluster_A::>
You will notice that the DR group for the old MetroCluster FC nodes (DR Group 1) is not listed when you run the
metrocluster configuration-settings dr-group show
command. You can use the
metrocluster node show
command on both sites to list all nodes.
cluster_A::> metrocluster node show
DR                               Configuration  DR
Group Cluster Node               State          Mirroring Mode
----- ------- ------------------ -------------- --------- --------------------
1     cluster_A
              node_A_1-FC        configured     enabled   normal
              node_A_2-FC        configured     enabled   normal
      cluster_B
              node_B_1-FC        configured     enabled   normal
              node_B_2-FC        configured     enabled   normal
2     cluster_A
              node_A_3-IP        ready to configure
                                                -         -
              node_A_4-IP        ready to configure
                                                -         -

cluster_B::> metrocluster node show
DR                               Configuration  DR
Group Cluster Node               State          Mirroring Mode
----- ------- ------------------ -------------- --------- --------------------
1     cluster_B
              node_B_1-FC        configured     enabled   normal
              node_B_2-FC        configured     enabled   normal
      cluster_A
              node_A_1-FC        configured     enabled   normal
              node_A_2-FC        configured     enabled   normal
2     cluster_B
              node_B_3-IP        ready to configure
                                                -         -
              node_B_4-IP        ready to configure
                                                -         -
-
Configure the MetroCluster IP interfaces for the newly joined MetroCluster IP nodes:
metrocluster configuration-settings interface create -cluster-name <cluster-name> -home-node <node-name> -home-port <home-port> -address <ip-address> -netmask <netmask>
See Configuring and connecting the MetroCluster IP interfaces for considerations when configuring the IP interfaces.
You can configure the MetroCluster IP interfaces from either cluster.
cluster_A::> metrocluster configuration-settings interface create -cluster-name cluster_A -home-node node_A_3-IP -home-port e1a -address 172.17.26.10 -netmask 255.255.255.0
[Job 260] Job succeeded: Interface Create is successful.

cluster_A::> metrocluster configuration-settings interface create -cluster-name cluster_A -home-node node_A_3-IP -home-port e1b -address 172.17.27.10 -netmask 255.255.255.0
[Job 261] Job succeeded: Interface Create is successful.

cluster_A::> metrocluster configuration-settings interface create -cluster-name cluster_A -home-node node_A_4-IP -home-port e1a -address 172.17.26.11 -netmask 255.255.255.0
[Job 262] Job succeeded: Interface Create is successful.

cluster_A::> metrocluster configuration-settings interface create -cluster-name cluster_A -home-node node_A_4-IP -home-port e1b -address 172.17.27.11 -netmask 255.255.255.0
[Job 263] Job succeeded: Interface Create is successful.

cluster_A::> metrocluster configuration-settings interface create -cluster-name cluster_B -home-node node_B_3-IP -home-port e1a -address 172.17.26.12 -netmask 255.255.255.0
[Job 264] Job succeeded: Interface Create is successful.

cluster_A::> metrocluster configuration-settings interface create -cluster-name cluster_B -home-node node_B_3-IP -home-port e1b -address 172.17.27.12 -netmask 255.255.255.0
[Job 265] Job succeeded: Interface Create is successful.

cluster_A::> metrocluster configuration-settings interface create -cluster-name cluster_B -home-node node_B_4-IP -home-port e1a -address 172.17.26.13 -netmask 255.255.255.0
[Job 266] Job succeeded: Interface Create is successful.

cluster_A::> metrocluster configuration-settings interface create -cluster-name cluster_B -home-node node_B_4-IP -home-port e1b -address 172.17.27.13 -netmask 255.255.255.0
[Job 267] Job succeeded: Interface Create is successful.
-
Verify the MetroCluster IP interfaces are created:
metrocluster configuration-settings interface show
cluster_A::> metrocluster configuration-settings interface show
DR                                                                    Config
Group Cluster Node    Network Address Netmask         Gateway         State
----- ------- ------- --------------- --------------- --------------- ---------
2     cluster_A
             node_A_3-IP
                 Home Port: e1a
                      172.17.26.10    255.255.255.0   -               completed
                 Home Port: e1b
                      172.17.27.10    255.255.255.0   -               completed
             node_A_4-IP
                 Home Port: e1a
                      172.17.26.11    255.255.255.0   -               completed
                 Home Port: e1b
                      172.17.27.11    255.255.255.0   -               completed
      cluster_B
             node_B_4-IP
                 Home Port: e1a
                      172.17.26.13    255.255.255.0   -               completed
                 Home Port: e1b
                      172.17.27.13    255.255.255.0   -               completed
             node_B_3-IP
                 Home Port: e1a
                      172.17.26.12    255.255.255.0   -               completed
                 Home Port: e1b
                      172.17.27.12    255.255.255.0   -               completed
8 entries were displayed.

cluster_A::>
-
Connect the MetroCluster IP interfaces:
metrocluster configuration-settings connection connect
This command might take several minutes to complete.
cluster_A::> metrocluster configuration-settings connection connect

cluster_A::>
-
Verify the connections are properly established:
metrocluster configuration-settings connection show
cluster_A::> metrocluster configuration-settings connection show
DR                    Source          Destination
Group Cluster Node    Network Address Network Address Partner Type Config State
----- ------- ------- --------------- --------------- ------------ ------------
2     cluster_A
              node_A_3-IP
                 Home Port: e1a
                      172.17.26.10    172.17.26.11    HA Partner   completed
                 Home Port: e1a
                      172.17.26.10    172.17.26.12    DR Partner   completed
                 Home Port: e1a
                      172.17.26.10    172.17.26.13    DR Auxiliary completed
                 Home Port: e1b
                      172.17.27.10    172.17.27.11    HA Partner   completed
                 Home Port: e1b
                      172.17.27.10    172.17.27.12    DR Partner   completed
                 Home Port: e1b
                      172.17.27.10    172.17.27.13    DR Auxiliary completed
              node_A_4-IP
                 Home Port: e1a
                      172.17.26.11    172.17.26.10    HA Partner   completed
                 Home Port: e1a
                      172.17.26.11    172.17.26.13    DR Partner   completed
                 Home Port: e1a
                      172.17.26.11    172.17.26.12    DR Auxiliary completed
                 Home Port: e1b
                      172.17.27.11    172.17.27.10    HA Partner   completed
                 Home Port: e1b
                      172.17.27.11    172.17.27.13    DR Partner   completed
                 Home Port: e1b
                      172.17.27.11    172.17.27.12    DR Auxiliary completed
DR                    Source          Destination
Group Cluster Node    Network Address Network Address Partner Type Config State
----- ------- ------- --------------- --------------- ------------ ------------
2     cluster_B
              node_B_4-IP
                 Home Port: e1a
                      172.17.26.13    172.17.26.12    HA Partner   completed
                 Home Port: e1a
                      172.17.26.13    172.17.26.11    DR Partner   completed
                 Home Port: e1a
                      172.17.26.13    172.17.26.10    DR Auxiliary completed
                 Home Port: e1b
                      172.17.27.13    172.17.27.12    HA Partner   completed
                 Home Port: e1b
                      172.17.27.13    172.17.27.11    DR Partner   completed
                 Home Port: e1b
                      172.17.27.13    172.17.27.10    DR Auxiliary completed
              node_B_3-IP
                 Home Port: e1a
                      172.17.26.12    172.17.26.13    HA Partner   completed
                 Home Port: e1a
                      172.17.26.12    172.17.26.10    DR Partner   completed
                 Home Port: e1a
                      172.17.26.12    172.17.26.11    DR Auxiliary completed
                 Home Port: e1b
                      172.17.27.12    172.17.27.13    HA Partner   completed
                 Home Port: e1b
                      172.17.27.12    172.17.27.10    DR Partner   completed
                 Home Port: e1b
                      172.17.27.12    172.17.27.11    DR Auxiliary completed
24 entries were displayed.

cluster_A::>
-
Verify disk autoassignment and partitioning:
disk show -pool Pool1
cluster_A::> disk show -pool Pool1
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
1.10.4                    -    10   4 SAS     remote      -         node_B_2
1.10.13                   -    10  13 SAS     remote      -         node_B_2
1.10.14                   -    10  14 SAS     remote      -         node_B_1
1.10.15                   -    10  15 SAS     remote      -         node_B_1
1.10.16                   -    10  16 SAS     remote      -         node_B_1
1.10.18                   -    10  18 SAS     remote      -         node_B_2
...
2.20.0              546.9GB    20   0 SAS     aggregate   aggr0_rha1_a1 node_a_1
2.20.3              546.9GB    20   3 SAS     aggregate   aggr0_rha1_a2 node_a_2
2.20.5              546.9GB    20   5 SAS     aggregate   rha1_a1_aggr1 node_a_1
2.20.6              546.9GB    20   6 SAS     aggregate   rha1_a1_aggr1 node_a_1
2.20.7              546.9GB    20   7 SAS     aggregate   rha1_a2_aggr1 node_a_2
2.20.10             546.9GB    20  10 SAS     aggregate   rha1_a1_aggr1 node_a_1
...
43 entries were displayed.

cluster_A::>
On systems configured for Advanced Drive Partitioning (ADP), the container type is "shared" rather than the "remote" type shown in the example output. -
Mirror the root aggregates:
storage aggregate mirror -aggregate aggr0_node_A_3_IP
You must complete this step on each MetroCluster IP node.
cluster_A::> aggr mirror -aggregate aggr0_node_A_3_IP

Info: Disks would be added to aggregate "aggr0_node_A_3_IP" on node "node_A_3-IP" in the following manner:

      Second Plex

        RAID Group rg0, 3 disks (block checksum, raid_dp)
                                                            Usable Physical
          Position   Disk                      Type           Size     Size
          ---------- ------------------------- ---------- -------- --------
          dparity    4.20.0                    SAS            -        -
          parity     4.20.3                    SAS            -        -
          data       4.20.1                    SAS        546.9GB  558.9GB

      Aggregate capacity available for volume use would be 467.6GB.

Do you want to continue? {y|n}: y

cluster_A::>
-
Verify that the root aggregates are mirrored:
storage aggregate show
cluster_A::> aggr show
Aggregate           Size Available Used% State   #Vols  Nodes        RAID Status
----------------- ------- --------- ----- ------- ------ ------------ ------------
aggr0_node_A_1_FC 349.0GB   16.84GB   95% online       1 node_A_1-FC  raid_dp, mirrored, normal
aggr0_node_A_2_FC 349.0GB   16.84GB   95% online       1 node_A_2-FC  raid_dp, mirrored, normal
aggr0_node_A_3_IP 467.6GB   22.63GB   95% online       1 node_A_3-IP  raid_dp, mirrored, normal
aggr0_node_A_4_IP 467.6GB   22.62GB   95% online       1 node_A_4-IP  raid_dp, mirrored, normal
aggr_data_a1       1.02TB    1.01TB    1% online       1 node_A_1-FC  raid_dp, mirrored, normal
aggr_data_a2       1.02TB    1.01TB    1% online       1 node_A_2-FC  raid_dp, mirrored,
Finalizing the addition of the MetroCluster IP nodes
You must incorporate the new DR group into the MetroCluster configuration and create mirrored data aggregates on the new nodes.
-
Configure the MetroCluster depending on whether it has a single or multiple data aggregates:
If your MetroCluster configuration has…
Then do this…
Multiple data aggregates
From any node's prompt, configure MetroCluster:
metrocluster configure <node-name>
You must run metrocluster configure, and not metrocluster configure -refresh true. (An illustrative invocation is sketched after this table.)
A single mirrored data aggregate
-
From any node's prompt, change to the advanced privilege level:
set -privilege advanced
You must respond with
y
when you are prompted to continue into advanced mode and you see the advanced mode prompt (*>). -
Configure the MetroCluster with the
-allow-with-one-aggregate true
parameter:
metrocluster configure -allow-with-one-aggregate true -node-name <node-name>
-
Return to the admin privilege level:
set -privilege admin
The best practice is to have multiple mirrored data aggregates. When there is only one mirrored aggregate, there is less protection because the metadata volumes are located on the same aggregate rather than on separate aggregates. -
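As an illustration only, with multiple data aggregates the configuration command might be issued as follows; the node name is reused from the earlier examples, and the -node-name form matches the single-aggregate variant shown above:
cluster_A::> metrocluster configure -node-name node_A_3-IP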
-
Reboot each of the new nodes:
node reboot -node <node_name> -inhibit-takeover true
You don't need to reboot the nodes in a specific order, but you should wait until one node is fully booted and all connections are established before rebooting the next node. -
Verify that the nodes are added to their DR group:
metrocluster node show
cluster_A::> metrocluster node show
DR                               Configuration  DR
Group Cluster Node               State          Mirroring Mode
----- ------- ------------------ -------------- --------- --------------------
1     cluster_A
              node-A-1-FC        configured     enabled   normal
              node-A-2-FC        configured     enabled   normal
      Cluster-B
              node-B-1-FC        configured     enabled   normal
              node-B-2-FC        configured     enabled   normal
2     cluster_A
              node-A-3-IP        configured     enabled   normal
              node-A-4-IP        configured     enabled   normal
      Cluster-B
              node-B-3-IP        configured     enabled   normal
              node-B-4-IP        configured     enabled   normal
8 entries were displayed.

cluster_A::>
-
Create mirrored data aggregates on each of the new MetroCluster nodes:
storage aggregate create -aggregate aggregate-name -node node-name -diskcount no-of-disks -mirror true
You must create at least one mirrored data aggregate per site. Two mirrored data aggregates per site are recommended on MetroCluster IP nodes to host the MDV volumes; however, a single aggregate per site is supported (although not recommended). It is acceptable for one MetroCluster site to have a single mirrored data aggregate while the other site has more than one. The following example shows the creation of an aggregate on node_A_3-IP.
cluster_A::> storage aggregate create -aggregate data_a3 -node node_A_3-IP -diskcount 10 -mirror t

Info: The layout for aggregate "data_a3" on node "node_A_3-IP" would be:

      First Plex

        RAID Group rg0, 5 disks (block checksum, raid_dp)
                                                            Usable Physical
          Position   Disk                      Type           Size     Size
          ---------- ------------------------- ---------- -------- --------
          dparity    5.10.15                   SAS            -        -
          parity     5.10.16                   SAS            -        -
          data       5.10.17                   SAS        546.9GB  547.1GB
          data       5.10.18                   SAS        546.9GB  558.9GB
          data       5.10.19                   SAS        546.9GB  558.9GB

      Second Plex

        RAID Group rg0, 5 disks (block checksum, raid_dp)
                                                            Usable Physical
          Position   Disk                      Type           Size     Size
          ---------- ------------------------- ---------- -------- --------
          dparity    4.20.17                   SAS            -        -
          parity     4.20.14                   SAS            -        -
          data       4.20.18                   SAS        546.9GB  547.1GB
          data       4.20.19                   SAS        546.9GB  547.1GB
          data       4.20.16                   SAS        546.9GB  547.1GB

      Aggregate capacity available for volume use would be 1.37TB.

Do you want to continue? {y|n}: y
[Job 440] Job succeeded: DONE

cluster_A::>
-
Verify that all nodes in the cluster are healthy:
cluster show
The output should display
true
for the
health
field for all nodes.
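A healthy cluster might report output similar to the following sketch (node names are reused from the earlier examples; exact columns can vary by ONTAP version):
cluster_A::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
node_A_1-FC           true    true
node_A_2-FC           true    true
node_A_3-IP           true    true
node_A_4-IP           true    true
4 entries were displayed.
-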
Confirm that takeover is possible and the nodes are connected by running the following command on both clusters:
storage failover show
cluster_A::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------------
Node_FC_1      Node_FC_2      true     Connected to Node_FC_2
Node_FC_2      Node_FC_1      true     Connected to Node_FC_1
Node_IP_1      Node_IP_2      true     Connected to Node_IP_2
Node_IP_2      Node_IP_1      true     Connected to Node_IP_1
-
Confirm that all disks attached to the newly joined MetroCluster IP nodes are present:
disk show
-
Verify the health of the MetroCluster configuration by running the following commands (an illustrative check summary is sketched after this list):
-
metrocluster check run
-
metrocluster check show
-
metrocluster interconnect mirror show
-
metrocluster interconnect adapter show
-
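As a rough sketch, metrocluster check show summarizes the result for each checked component; the timestamp and component list below are illustrative and vary by ONTAP version and configuration:
cluster_A::> metrocluster check show
Last Checked On: 10/15/2019 12:45:00

Component           Result
------------------- ---------
nodes               ok
lifs                ok
config-replication  ok
aggregates          ok
clusters            ok
connections         ok
6 entries were displayed.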
-
Move the MDV_CRS volumes from the old nodes to the new nodes at the advanced privilege level.
-
Display the volumes to identify the MDV volumes:
If you have a single mirrored data aggregate per site, move both MDV volumes to this single aggregate. If you have two or more mirrored data aggregates, move each MDV volume to a different aggregate. The following example shows the MDV volumes in the volume show output:
cluster_A::> volume show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
...
cluster_A   MDV_CRS_2c78e009ff5611e9b0f300a0985ef8c4_A
                       aggr_b1      -          RW            -          -     -
cluster_A   MDV_CRS_2c78e009ff5611e9b0f300a0985ef8c4_B
                       aggr_b2      -          RW            -          -     -
cluster_A   MDV_CRS_d6b0b313ff5611e9837100a098544e51_A
                       aggr_a1      online     RW         10GB     9.50GB    0%
cluster_A   MDV_CRS_d6b0b313ff5611e9837100a098544e51_B
                       aggr_a2      online     RW         10GB     9.50GB    0%
...
11 entries were displayed.
-
Set the advanced privilege level:
set -privilege advanced
-
Move the MDV volumes, one at a time:
volume move start -volume mdv-volume -destination-aggregate aggr-on-new-node -vserver vserver-name
The following example shows the command and output for moving MDV_CRS_d6b0b313ff5611e9837100a098544e51_A to aggregate data_a3 on node_A_3-IP.
cluster_A::*> vol move start -volume MDV_CRS_d6b0b313ff5611e9837100a098544e51_A -destination-aggregate data_a3 -vserver cluster_A

Warning: You are about to modify the system volume "MDV_CRS_d6b0b313ff5611e9837100a098544e51_A". This might cause severe performance or stability problems. Do not proceed unless directed to do so by support. Do you want to proceed? {y|n}: y

[Job 494] Job is queued: Move "MDV_CRS_d6b0b313ff5611e9837100a098544e51_A" in Vserver "cluster_A" to aggregate "data_a3". Use the "volume move show -vserver cluster_A -volume MDV_CRS_d6b0b313ff5611e9837100a098544e51_A" command to view the status of this operation.
-
Use the volume show command to check that the MDV volume has been successfully moved:
volume show mdv-name
The following output shows that the MDV volume has been successfully moved.
cluster_A::*> vol show MDV_CRS_d6b0b313ff5611e9837100a098544e51_B
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster_A   MDV_CRS_d6b0b313ff5611e9837100a098544e51_B
                       aggr_a2      online     RW         10GB     9.50GB    0%
-
Return to admin mode:
set -privilege admin
-