Planning
The procedures to migrate SAN resources using FLI are documented in NetApp ONTAP Foreign LUN Import Documentation.
From a database and host point of view, no special steps are required. After the FC zones are updated and the LUNs become available on ONTAP, LVM can read the LVM metadata from the LUNs, and the volume groups are ready for use with no further configuration steps. In rare cases, an environment might include configuration files that are hard-coded with references to the prior storage array. For example, a Linux system with /etc/multipath.conf rules that reference the WWN of a given device must have those rules updated to reflect the changes introduced by FLI.
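As an illustration, an /etc/multipath.conf stanza of the following form binds a friendly alias to a specific device WWID. The WWID and alias shown here are hypothetical placeholders; after the import, any such stanza must be updated with the WWID reported for the corresponding ONTAP LUN.

# Hypothetical /etc/multipath.conf excerpt: the wwid below refers to a LUN on
# the old array and must be replaced with the WWID of the new ONTAP LUN.
multipaths {
    multipath {
        wwid   360000000000000000000000000000001
        alias  orabin_lun0
    }
}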
See the NetApp Compatibility Matrix for information on supported configurations. If your environment is not included, contact your NetApp representative for assistance.
This example shows the migration of both ASM and LVM LUNs hosted on a Linux server. FLI is supported on other operating systems; although the host-side commands might differ, the principles are the same and the ONTAP procedures are identical.
Identify LVM LUNs
The first step in preparation is to identify the LUNs to be migrated. In the example shown here, two SAN-based file systems are mounted at /orabin and /backups.
[root@host1 ~]# df -k
Filesystem                   1K-blocks      Used Available Use% Mounted on
/dev/mapper/rhel-root         52403200   8811464  43591736  17% /
devtmpfs                      65882776         0  65882776   0% /dev
...
fas8060-nfs-public:/install  199229440 119368128  79861312  60% /install
/dev/mapper/sanvg-lvorabin    20961280  12348476   8612804  59% /orabin
/dev/mapper/sanvg-lvbackups   73364480  62947536  10416944  86% /backups
The name of the volume group can be extracted from the device name, which uses the format (volume group name)-(logical volume name). In this case, the volume group is called sanvg.
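If you prefer not to parse device-mapper names, the volume group and its logical volumes can also be confirmed directly with the standard LVM reporting commands. The following is a minimal sketch using the sanvg name from this example:

# Confirm the volume group and list its logical volumes and backing devices
[root@host1 ~]# vgs sanvg
[root@host1 ~]# lvs -o lv_name,vg_name,devices sanvg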
The pvdisplay command can be used as follows to identify the LUNs that support this volume group. In this case, there are 10 LUNs that make up the sanvg volume group.
[root@host1 ~]# pvdisplay -C -o pv_name,pv_size,pv_fmt,vg_name
  PV                                             PSize   VG
  /dev/mapper/3600a0980383030445424487556574266   10.00g sanvg
  /dev/mapper/3600a0980383030445424487556574267   10.00g sanvg
  /dev/mapper/3600a0980383030445424487556574268   10.00g sanvg
  /dev/mapper/3600a0980383030445424487556574269   10.00g sanvg
  /dev/mapper/3600a098038303044542448755657426a   10.00g sanvg
  /dev/mapper/3600a098038303044542448755657426b   10.00g sanvg
  /dev/mapper/3600a098038303044542448755657426c   10.00g sanvg
  /dev/mapper/3600a098038303044542448755657426d   10.00g sanvg
  /dev/mapper/3600a098038303044542448755657426e   10.00g sanvg
  /dev/mapper/3600a098038303044542448755657426f   10.00g sanvg
  /dev/sda2                                      278.38g rhel
Identify ASM LUNs
ASM LUNs must also be migrated. To obtain the number of LUNs and their paths, run the following query from sqlplus as the sysasm user:
SQL> select path||' '||os_mb from v$asm_disk;

PATH||''||OS_MB
--------------------------------------------------------------------------------
/dev/oracleasm/disks/ASM0 10240
/dev/oracleasm/disks/ASM9 10240
/dev/oracleasm/disks/ASM8 10240
/dev/oracleasm/disks/ASM7 10240
/dev/oracleasm/disks/ASM6 10240
/dev/oracleasm/disks/ASM5 10240
/dev/oracleasm/disks/ASM4 10240
/dev/oracleasm/disks/ASM1 10240
/dev/oracleasm/disks/ASM3 10240
/dev/oracleasm/disks/ASM2 10240

10 rows selected.

SQL>
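The disk paths in this example are ASMLib devices. To cross-reference an ASMLib disk label with the underlying block device (and therefore the LUN to be migrated), the oracleasm querydisk command can be used. This is a sketch that assumes ASMLib is in use and that ASM0 is one of the labels shown above; environments that present ASM disks through udev rules or ASM Filter Driver identify the underlying devices differently.

# Show the block device backing the ASMLib disk labeled ASM0
[root@host1 ~]# oracleasm querydisk -p ASM0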
FC network changes
The environment in this example contains 20 LUNs to be migrated. Update the SAN so that ONTAP can access the LUNs on the existing array. Data is not migrated at this point, but ONTAP must read configuration information from the existing LUNs to create the new home for that data.
At least one HBA port on the AFF/FAS system must be configured as an initiator port. In addition, the FC zones must be updated so that ONTAP can access the LUNs on the foreign storage array. Some storage arrays have LUN masking configured, which limits which WWNs can access a given LUN. In such cases, LUN masking must also be updated to grant access to the ONTAP WWNs.
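On the ONTAP side, an onboard UTA2/FC port is typically switched from target to initiator mode with the unified-connect commands. The node and adapter names below are hypothetical, the port usually must be taken offline before the change, and a node reboot may be required, so confirm the exact procedure in the FLI documentation for your ONTAP release.

Cluster01::> system node hardware unified-connect show -node Cluster01-01
Cluster01::> system node hardware unified-connect modify -node Cluster01-01 -adapter 0e -type initiator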
After this step is completed, ONTAP should be able to view the foreign storage array with the storage array show command. The key field it returns is the prefix that is used to identify the foreign LUNs on the system. In the example below, the LUNs on the foreign array FOREIGN_1 appear within ONTAP using the prefix FOR-1.
Identify foreign array
Cluster01::> storage array show -fields name,prefix
name          prefix
------------- ------
FOREIGN_1     FOR-1

Cluster01::>
Identify foreign LUNs
The LUNs can be listed by passing the array-name to the storage disk show command. The data returned is referenced multiple times during the migration procedure.
Cluster01::> storage disk show -array-name FOREIGN_1 -fields disk,serial
disk      serial-number
--------  -------------
FOR-1.1   800DT$HuVWBX
FOR-1.2   800DT$HuVWBZ
FOR-1.3   800DT$HuVWBW
FOR-1.4   800DT$HuVWBY
FOR-1.5   800DT$HuVWB/
FOR-1.6   800DT$HuVWBa
FOR-1.7   800DT$HuVWBd
FOR-1.8   800DT$HuVWBb
FOR-1.9   800DT$HuVWBc
FOR-1.10  800DT$HuVWBe
FOR-1.11  800DT$HuVWBf
FOR-1.12  800DT$HuVWBg
FOR-1.13  800DT$HuVWBi
FOR-1.14  800DT$HuVWBh
FOR-1.15  800DT$HuVWBj
FOR-1.16  800DT$HuVWBk
FOR-1.17  800DT$HuVWBm
FOR-1.18  800DT$HuVWBl
FOR-1.19  800DT$HuVWBo
FOR-1.20  800DT$HuVWBn
20 entries were displayed.

Cluster01::>
Register foreign array LUNs as import candidates
The foreign LUNs are not initially classified as any particular LUN type. Before data can be imported, each LUN must be tagged as foreign, which makes it a candidate for the import process. This step is completed by passing the serial number to the storage disk modify command, as shown in the following example. Note that this process only tags the LUN as foreign within ONTAP; no data is written to the foreign LUN itself.
Cluster01::*> storage disk modify {-serial-number 800DT$HuVWBW} -is-foreign true
Cluster01::*> storage disk modify {-serial-number 800DT$HuVWBX} -is-foreign true
...
Cluster01::*> storage disk modify {-serial-number 800DT$HuVWBn} -is-foreign true
Cluster01::*> storage disk modify {-serial-number 800DT$HuVWBo} -is-foreign true
Cluster01::*>
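To confirm that all 20 disks were tagged, the disk list can be redisplayed with the container type included; disks that have been successfully marked should report a container type of foreign. The field name used here is the standard storage disk show field, but verify it against your ONTAP release.

Cluster01::*> storage disk show -array-name FOREIGN_1 -fields disk,container-type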
Create volumes to host migrated LUNs
A volume is needed to host the migrated LUNs. The exact volume configuration depends on the overall plan to leverage ONTAP features. In this example, the ASM LUNs are placed into one volume and the LVM LUNs are placed in a second volume. Doing so allows you to manage the LUNs as independent groups for purposes such as tiering, creation of snapshots, or setting QoS controls.
Set the snapshot-policy to none. The migration process can include a great deal of data turnover. Therefore, there might be a large increase in space consumption if snapshots are accidentally created, because unwanted transient data would be captured in those snapshots.
Cluster01::> volume create -volume new_asm -aggregate data_02 -size 120G -snapshot-policy none
[Job 1152] Job succeeded: Successful
Cluster01::> volume create -volume new_lvm -aggregate data_02 -size 120G -snapshot-policy none
[Job 1153] Job succeeded: Successful
Cluster01::>
Create ONTAP LUNs
After the volumes are created, the new LUNs must be created. Normally, the creation of a LUN requires the user to specify such information as the LUN size, but in this case the foreign-disk argument is passed to the command. As a result, ONTAP replicates the current LUN configuration data from the specified serial number. It also uses the LUN geometry and partition table data to adjust LUN alignment and establish optimum performance.
In this step, serial numbers must be cross-referenced against the foreign array to make sure that the correct foreign LUN is matched to the correct new LUN.
Cluster01::*> lun create -vserver vserver1 -path /vol/new_asm/LUN0 -ostype linux -foreign-disk 800DT$HuVWBW

Created a LUN of size 10g (10737418240)

Cluster01::*> lun create -vserver vserver1 -path /vol/new_asm/LUN1 -ostype linux -foreign-disk 800DT$HuVWBX

Created a LUN of size 10g (10737418240)
...
Created a LUN of size 10g (10737418240)

Cluster01::*> lun create -vserver vserver1 -path /vol/new_lvm/LUN8 -ostype linux -foreign-disk 800DT$HuVWBn

Created a LUN of size 10g (10737418240)

Cluster01::*> lun create -vserver vserver1 -path /vol/new_lvm/LUN9 -ostype linux -foreign-disk 800DT$HuVWBo

Created a LUN of size 10g (10737418240)
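Before taking the LUNs offline, it can be helpful to confirm that all 20 new LUNs were created with the expected 10g size. The following check uses standard lun show fields against the two volumes created earlier.

Cluster01::*> lun show -vserver vserver1 -volume new_asm -fields path,size,state
Cluster01::*> lun show -vserver vserver1 -volume new_lvm -fields path,size,state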
Create import relationships
The LUNs have now been created but are not yet configured as migration destinations. Before this step can be taken, the LUNs must first be placed offline. This requirement is designed to protect data from user error. If ONTAP allowed a migration to be performed against an online LUN, a typographical error could result in overwriting active data. Requiring the user to first take a LUN offline helps confirm that the correct target LUN is used as a migration destination.
Cluster01::*> lun offline -vserver vserver1 -path /vol/new_asm/LUN0

Warning: This command will take LUN "/vol/new_asm/LUN0" in Vserver "vserver1" offline.
Do you want to continue? {y|n}: y

Cluster01::*> lun offline -vserver vserver1 -path /vol/new_asm/LUN1

Warning: This command will take LUN "/vol/new_asm/LUN1" in Vserver "vserver1" offline.
Do you want to continue? {y|n}: y
...
Warning: This command will take LUN "/vol/new_lvm/LUN8" in Vserver "vserver1" offline.
Do you want to continue? {y|n}: y

Cluster01::*> lun offline -vserver vserver1 -path /vol/new_lvm/LUN9

Warning: This command will take LUN "/vol/new_lvm/LUN9" in Vserver "vserver1" offline.
Do you want to continue? {y|n}: y
After the LUNs are offline, you can establish the import relationship by passing the foreign LUN serial number to the lun import create command.
Cluster01::*> lun import create -vserver vserver1 -path /vol/new_asm/LUN0 -foreign-disk 800DT$HuVWBW
Cluster01::*> lun import create -vserver vserver1 -path /vol/new_asm/LUN1 -foreign-disk 800DT$HuVWBX
...
Cluster01::*> lun import create -vserver vserver1 -path /vol/new_lvm/LUN8 -foreign-disk 800DT$HuVWBn
Cluster01::*> lun import create -vserver vserver1 -path /vol/new_lvm/LUN9 -foreign-disk 800DT$HuVWBo
Cluster01::*>
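The import relationships can be reviewed with the lun import show command before the LUNs are brought back online. The fields requested here are a subset of what the command reports, and the available fields can vary by ONTAP release.

Cluster01::*> lun import show -vserver vserver1 -fields foreign-disk,path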
After all import relationships are established, the LUNs can be placed back online.
Cluster01::*> lun online -vserver vserver1 -path /vol/new_asm/LUN0
Cluster01::*> lun online -vserver vserver1 -path /vol/new_asm/LUN1
...
Cluster01::*> lun online -vserver vserver1 -path /vol/new_lvm/LUN8
Cluster01::*> lun online -vserver vserver1 -path /vol/new_lvm/LUN9
Cluster01::*>
Create initiator group
An initiator group (igroup) is part of the ONTAP LUN masking architecture. A newly created LUN is not accessible unless a host is first granted access. This is done by creating an igroup that lists either the FC WWNs or iSCSI initiator names that should be granted access. At the time this report was written, FLI was supported only for FC LUNs. However, converting to iSCSI postmigration is a simple task, as shown in Protocol Conversion.
In this example, an igroup is created that contains two WWNs that correspond to the two ports available on the host's HBA.
Cluster01::*> igroup create linuxhost -protocol fcp -ostype linux -initiator 21:00:00:0e:1e:16:63:50 21:00:00:0e:1e:16:63:51
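The WWPNs used in the igroup can be read directly from the Linux host. This is a minimal sketch assuming FC HBAs exposed through the standard fc_host sysfs interface; the values are printed in raw hexadecimal form rather than the colon-separated format used by ONTAP.

# Display the WWPN of each FC HBA port on the host
[root@host1 ~]# cat /sys/class/fc_host/host*/port_name
0x2100000e1e166350
0x2100000e1e166351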
Map new LUNs to host
After the igroup is created, the LUNs are mapped to that igroup. These LUNs are available only to the WWNs included in the igroup. NetApp assumes that, at this stage in the migration process, the host has not yet been zoned to ONTAP. This is important because if the host were simultaneously zoned to both the foreign array and the new ONTAP system, LUNs bearing the same serial number could be discovered on each array, which could lead to multipath malfunctions or damage to data.
Cluster01::*> lun map -vserver vserver1 -path /vol/new_asm/LUN0 -igroup linuxhost
Cluster01::*> lun map -vserver vserver1 -path /vol/new_asm/LUN1 -igroup linuxhost
...
Cluster01::*> lun map -vserver vserver1 -path /vol/new_lvm/LUN8 -igroup linuxhost
Cluster01::*> lun map -vserver vserver1 -path /vol/new_lvm/LUN9 -igroup linuxhost
Cluster01::*>
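As a final check, the mappings can be listed to confirm that every new LUN is mapped to the linuxhost igroup. lun mapping show is the standard command for this, although the exact fields available can vary by release.

Cluster01::*> lun mapping show -vserver vserver1 -fields path,igroup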