SAP system clone with SnapCenter
This section provides a step-by-step description of the SAP system clone operation, which can be used to set up a repair system to address logical corruption.
The figure below summarizes the required steps for an SAP system clone operation using SnapCenter.

1. Prepare the target host.
2. SnapCenter clone create workflow for the SAP HANA shared volume.
3. Start the SAP HANA services.
4. SnapCenter clone create workflow for the SAP HANA data volume, including database recovery.
5. The SAP HANA system can now be used as a repair system.
If you must reset the system to a different Snapshot backup, then step 6 and step 4 are sufficient. The SAP HANA shared volume can remain mounted. If the system is no longer needed, the clean-up process is performed with the following steps.
6. SnapCenter clone delete workflow for the SAP HANA data volume, including database shutdown.
7. Stop the SAP HANA services.
8. SnapCenter clone delete workflow for the SAP HANA shared volume.
Prerequisites and limitations
The workflows described in the following sections have a few prerequisites and limitations regarding the SAP HANA system architecture and the SnapCenter configuration.
- The described workflow is valid for single-host SAP HANA MDC systems. Multiple-host systems are not supported.
- The SnapCenter SAP HANA plug-in must be deployed on the target host to enable the execution of automation scripts.
- The workflow has been validated for NFS. The automation script sc-mount-volume.sh, which is used to mount the SAP HANA shared volume, does not support FCP. This step must either be done manually or by extending the script.
- The described workflow is only valid for the SnapCenter 5.0 release or higher.
Lab setup
The figure below shows the lab setup used for system clone operation.
The following software versions were used:
- SnapCenter 5.0
- SAP HANA systems: HANA 2.0 SPS6 rev.61
- SLES 15
- ONTAP 9.7P7
All SAP HANA systems must be configured based on the configuration guide SAP HANA on NetApp AFF systems with NFS. SnapCenter and the SAP HANA resources were configured based on the best practice guide SAP HANA Backup and Recovery with SnapCenter.
Target host preparation
This section describes the preparation steps required on a server that is used as a system clone target.
During normal operation, the target host might be used for other purposes, for example, as an SAP HANA QA or test system. Therefore, most of the described steps must be executed when the system clone operation is requested. On the other hand, the relevant configuration files, such as /etc/fstab and /usr/sap/sapservices, can be prepared in advance and then put into production simply by copying the configuration file.
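For example, prepared variants of the configuration files could be kept next to the live files on the target host and activated on demand. The .repair suffix used here is purely illustrative and not part of any NetApp or SAP convention.

# Hypothetical example: activate the prepared repair-system configuration
cp /etc/fstab.repair /etc/fstab
cp /usr/sap/sapservices.repair /usr/sap/sapservices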
The target host preparation also includes shutting down the SAP HANA QA or test system.
Target server host name and IP address
The host name of the target server must be identical to the host name of the source system. The IP address can be different.
Proper fencing of the target server must be established so that it cannot communicate with other systems. If proper fencing is not in place, the cloned production system might exchange data with other production systems.

In our lab setup, we changed the host name of the target system only internally, from the target system's perspective. Externally, the host was still accessible with the host name hana-7. When logged in to the host, the host itself is hana-1.
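On SLES, one way to change the host name only from the host's own perspective is hostnamectl; this is a sketch of what was done in the lab, not a SnapCenter requirement.

# Set the host name on the target so that it matches the source system
hostnamectl set-hostname hana-1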
Install required software
The SAP Host Agent software must be installed on the target server. For full information, see the SAP Host Agent documentation at the SAP Help Portal.
The SnapCenter SAP HANA plug-in must be deployed on the target host using the add host operation within SnapCenter.
Configure users, ports, and SAP services
The required users and groups for the SAP HANA database must be available on the target server. Typically, central user management is used; therefore, no configuration steps are necessary on the target server. The required ports for the SAP HANA database must be configured on the target host. The configuration can be copied from the source system by copying the /etc/services file to the target server.
The required SAP services entries must be available on the target host. The configuration can be copied from the source system by copying the /usr/sap/sapservices file to the target server. The following output shows the required entries for the SAP HANA database used in the lab setup.
#!/bin/sh
LD_LIBRARY_PATH=/usr/sap/SS1/HDB00/exe:$LD_LIBRARY_PATH;export LD_LIBRARY_PATH;/usr/sap/SS1/HDB00/exe/sapstartsrv pf=/usr/sap/SS1/SYS/profile/SS1_HDB00_hana-1 -D -u ss1adm limit.descriptors=1048576
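Assuming the source host is reachable over the network as hana-1, as in this lab, both files could simply be copied ahead of time, before the target host is fenced; adjust host names and paths to your environment.

# Copy port definitions and SAP services entries from the source system
scp hana-1:/etc/services /etc/services
scp hana-1:/usr/sap/sapservices /usr/sap/sapservices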
Prepare log and log backup volume
Because the log volume does not need to be cloned from the source system (any recovery is performed with the CLEAR LOG option), an empty log volume must be prepared at the target host.
Because the source system has been configured with a separate log backup volume, an empty log backup volume must be prepared and mounted to the same mount point as at the source system.
hana-1:/# cat /etc/fstab
192.168.175.117:/SS1_repair_log_mnt00001 /hana/log/SS1/mnt00001 nfs rw,vers=3,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,nolock 0 0
192.168.175.117:/SS1_repair_log_backup /mnt/log-backup nfs rw,vers=3,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,nolock 0 0
Within the log volume, you must create the hdb* subdirectories in the same way as at the source system.
hana-1:/ # ls -al /hana/log/SS1/mnt00001/
total 16
drwxrwxrwx 5 root   root   4096 Dec  1 06:15 .
drwxrwxrwx 1 root   root     16 Nov 30 08:56 ..
drwxr-xr-- 2 ss1adm sapsys 4096 Dec  1 06:14 hdb00001
drwxr-xr-- 2 ss1adm sapsys 4096 Dec  1 06:15 hdb00002.00003
drwxr-xr-- 2 ss1adm sapsys 4096 Dec  1 06:15 hdb00003.00003
Within the log backup volume, you must create subdirectories for the system and the tenant database.
hana-1:/ # ls -al /mnt/log-backup/
total 12
drwxr-xr-- 2 ss1adm sapsys 4096 Dec  1 04:48 .
drwxr-xr-- 2 ss1adm sapsys 4096 Dec  1 03:42 ..
drwxr-xr-- 2 ss1adm sapsys 4096 Dec  1 06:15 DB_SS1
drwxr-xr-- 2 ss1adm sapsys 4096 Dec  1 06:14 SYSTEMDB
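As a sketch based on the directory listings above, the subdirectories and their ownership could be created as follows; the hdb* names must match the persistence layout of your source system.

# Create the log subdirectories as they exist at the source system
mkdir -p /hana/log/SS1/mnt00001/hdb00001
mkdir -p /hana/log/SS1/mnt00001/hdb00002.00003
mkdir -p /hana/log/SS1/mnt00001/hdb00003.00003
# Create the log backup subdirectories for the system and tenant database
mkdir -p /mnt/log-backup/SYSTEMDB /mnt/log-backup/DB_SS1
# Hand ownership to the <sid>adm user
chown -R ss1adm:sapsys /hana/log/SS1/mnt00001 /mnt/log-backup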
Prepare file system mounts
You must prepare mount points for the data and the shared volume.
In our example, the directories /hana/data/SS1/mnt00001, /hana/shared, and /usr/sap/SS1 must be created.
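For example:

# Create the mount points for the data and shared volumes
mkdir -p /hana/data/SS1/mnt00001 /hana/shared /usr/sap/SS1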
Prepare script execution
You must add the scripts that are executed at the target system to the SnapCenter allowed commands configuration file.
hana-7:/opt/NetApp/snapcenter/scc/etc # cat /opt/NetApp/snapcenter/scc/etc/allowed_commands.config
command: mount
command: umount
command: /mnt/sapcc-share/SAP-System-Refresh/sc-system-refresh.sh
command: /mnt/sapcc-share/SAP-System-Refresh/sc-mount-volume.sh
hana-7:/opt/NetApp/snapcenter/scc/etc #
Cloning the SAP HANA shared volume
1. Select a Snapshot backup of the source system SS1 shared volume and click Clone.
2. Select the host where the target repair system has been prepared. The NFS export IP address must be the storage network interface of the target host. As the target SID, keep the same SID as the source system; in our example, SS1.
3. Enter the mount script with the required command-line options.

The SAP HANA system uses a single volume for /hana/shared as well as for /usr/sap/SS1, separated into subdirectories as recommended in the configuration guide SAP HANA on NetApp AFF systems with NFS. The script sc-mount-volume.sh supports this configuration using a special command-line option for the mount path. If the mount path command-line option is equal to usr-sap-and-shared, the script mounts the subdirectories shared and usr-sap in the volume accordingly, as sketched below.
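For NFS, a rough manual equivalent of the script in usr-sap-and-shared mode looks like this; the clone volume name is taken from the log output shown below and will differ in your environment.

# Mount the usr-sap and shared subdirectories of the cloned volume
mount -o rw,vers=3,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,nolock 192.168.175.117:/SS1_shared_Clone_05132205140448713/usr-sap /usr/sap/SS1
mount -o rw,vers=3,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,nolock 192.168.175.117:/SS1_shared_Clone_05132205140448713/shared /hana/shared
# Hand ownership to the <sid>adm user
chown ss1adm:sapsys /usr/sap/SS1 /hana/shared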
4. The Job Details screen in SnapCenter shows the progress of the operation.
5. The log file of the sc-mount-volume.sh script shows the different steps executed for the mount operation.
20201201041441###hana-1###sc-mount-volume.sh: Adding entry in /etc/fstab.
20201201041441###hana-1###sc-mount-volume.sh: 192.168.175.117:/SS1_shared_Clone_05132205140448713/usr-sap /usr/sap/SS1 nfs rw,vers=3,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,nolock 0 0
20201201041441###hana-1###sc-mount-volume.sh: Mounting volume: mount /usr/sap/SS1.
20201201041441###hana-1###sc-mount-volume.sh: 192.168.175.117:/SS1_shared_Clone_05132205140448713/shared /hana/shared nfs rw,vers=3,hard,timeo=600,rsize=1048576,wsize=1048576,intr,noatime,nolock 0 0
20201201041441###hana-1###sc-mount-volume.sh: Mounting volume: mount /hana/shared.
20201201041441###hana-1###sc-mount-volume.sh: usr-sap-and-shared mounted successfully.
20201201041441###hana-1###sc-mount-volume.sh: Change ownership to ss1adm.
6. When the SnapCenter workflow is finished, the /usr/sap/SS1 and /hana/shared file systems are mounted at the target host.
hana-1:~ # df
Filesystem                                                   1K-blocks     Used Available Use% Mounted on
192.168.175.117:/SS1_repair_log_mnt00001                     262144000      320 262143680   1% /hana/log/SS1/mnt00001
192.168.175.100:/sapcc_share                                1020055552 53485568 966569984   6% /mnt/sapcc-share
192.168.175.117:/SS1_repair_log_backup                       104857600      256 104857344   1% /mnt/log-backup
192.168.175.117:/SS1_shared_Clone_05132205140448713/usr-sap  262144064 10084608 252059456   4% /usr/sap/SS1
192.168.175.117:/SS1_shared_Clone_05132205140448713/shared   262144064 10084608 252059456   4% /hana/shared
7. Within SnapCenter, a new resource for the cloned volume is visible.
8. Now that the /hana/shared volume is available, the SAP HANA services can be started.
hana-1:/mnt/sapcc-share/SAP-System-Refresh # systemctl start sapinit
9. The SAP Host Agent and sapstartsrv processes are now started.
hana-1:/mnt/sapcc-share/SAP-System-Refresh # ps -ef |grep sap
root     12377     1  0 04:34 ?     00:00:00 /usr/sap/hostctrl/exe/saphostexec pf=/usr/sap/hostctrl/exe/host_profile
sapadm   12403     1  0 04:34 ?     00:00:00 /usr/lib/systemd/systemd --user
sapadm   12404 12403  0 04:34 ?     00:00:00 (sd-pam)
sapadm   12434     1  1 04:34 ?     00:00:00 /usr/sap/hostctrl/exe/sapstartsrv pf=/usr/sap/hostctrl/exe/host_profile -D
root     12485 12377  0 04:34 ?     00:00:00 /usr/sap/hostctrl/exe/saphostexec pf=/usr/sap/hostctrl/exe/host_profile
root     12486 12485  0 04:34 ?     00:00:00 /usr/sap/hostctrl/exe/saposcol -l -w60 pf=/usr/sap/hostctrl/exe/host_profile
ss1adm   12504     1  0 04:34 ?     00:00:00 /usr/sap/SS1/HDB00/exe/sapstartsrv pf=/usr/sap/SS1/SYS/profile/SS1_HDB00_hana-1 -D -u ss1adm
root     12582 12486  0 04:34 ?     00:00:00 /usr/sap/hostctrl/exe/saposcol -l -w60 pf=/usr/sap/hostctrl/exe/host_profile
root     12585  7613  0 04:34 pts/0 00:00:00 grep --color=auto sap
hana-1:/mnt/sapcc-share/SAP-System-Refresh #
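To verify that sapstartsrv responds for the HANA instance, sapcontrol can be queried; this sketch assumes instance number 00, as used in the lab setup. At this stage, the database processes are expected to be stopped, because the database is only recovered in the next step.

# Query the process list of instance 00 as the <sid>adm user
su - ss1adm -c "sapcontrol -nr 00 -function GetProcessList"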
Cloning additional SAP application services
Additional SAP application services are cloned in the same way as the SAP HANA shared volume, as described in the section “Cloning the SAP HANA shared volume.” Of course, the required storage volumes of the SAP application servers must be protected with SnapCenter as well.
You must add the required services entries to /usr/sap/sapservices, and the ports, users, and the file system mount points (for example, /usr/sap/SID) must be prepared.
Cloning the data volume and recovery of the HANA database
1. Select an SAP HANA Snapshot backup of the source system SS1.
2. Select the host where the target repair system has been prepared. The NFS export IP address must be the storage network interface of the target host. As the target SID, keep the same SID as the source system; in our example, SS1.
3. Enter the post-clone scripts with the required command-line options.

The script for the recovery operation recovers the SAP HANA database to the point in time of the Snapshot operation and does not execute any forward recovery. If a forward recovery to a specific point in time is required, the recovery must be performed manually; see the sketch below. A manual forward recovery also requires that the log backups from the source system are available at the target host.
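As a hedged example, a forward recovery of the tenant database to a specific point in time could be run against the system database with hdbsql, reusing the SS1KEY user store key from the lab setup. The timestamp is illustrative, and the exact recovery syntax for your HANA revision is documented in the SAP HANA Administration Guide.

# Point-in-time forward recovery of tenant SS1 after the clone operation;
# requires the log backups of the source system at the same path as at the source.
# Run as ss1adm so that the SS1KEY user store key is found.
/usr/sap/SS1/SYS/exe/hdb/hdbsql -U SS1KEY "RECOVER DATABASE FOR SS1 UNTIL TIMESTAMP '2020-12-01 06:00:00' CLEAR LOG USING SNAPSHOT CHECK ACCESS USING FILE"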
The Job Details screen in SnapCenter shows the progress of the operation.
The log file of the sc-system-refresh.sh script shows the different steps that are executed for the mount and the recovery operation.
20201201052124###hana-1###sc-system-refresh.sh: Recover system database.
20201201052124###hana-1###sc-system-refresh.sh: /usr/sap/SS1/HDB00/exe/Python/bin/python /usr/sap/SS1/HDB00/exe/python_support/recoverSys.py --command "RECOVER DATA USING SNAPSHOT CLEAR LOG"
20201201052156###hana-1###sc-system-refresh.sh: Wait until SAP HANA database is started ....
20201201052156###hana-1###sc-system-refresh.sh: Status: GRAY
20201201052206###hana-1###sc-system-refresh.sh: Status: GREEN
20201201052206###hana-1###sc-system-refresh.sh: SAP HANA database is started.
20201201052206###hana-1###sc-system-refresh.sh: Source system has a single tenant and tenant name is identical to source SID: SS1
20201201052206###hana-1###sc-system-refresh.sh: Target tenant will have the same name as target SID: SS1.
20201201052206###hana-1###sc-system-refresh.sh: Recover tenant database SS1.
20201201052206###hana-1###sc-system-refresh.sh: /usr/sap/SS1/SYS/exe/hdb/hdbsql -U SS1KEY RECOVER DATA FOR SS1 USING SNAPSHOT CLEAR LOG
0 rows affected (overall time 34.773885 sec; server time 34.772398 sec)
20201201052241###hana-1###sc-system-refresh.sh: Checking availability of Indexserver for tenant SS1.
20201201052241###hana-1###sc-system-refresh.sh: Recovery of tenant database SS1 succesfully finished.
20201201052241###hana-1###sc-system-refresh.sh: Status: GREEN

After the recovery operation, the HANA database is running and the data volume is mounted at the target host.

hana-1:/mnt/log-backup # df
Filesystem                                                   1K-blocks     Used Available Use% Mounted on
192.168.175.117:/SS1_repair_log_mnt00001                     262144000   760320 261383680   1% /hana/log/SS1/mnt00001
192.168.175.100:/sapcc_share                                1020055552 53486592 966568960   6% /mnt/sapcc-share
192.168.175.117:/SS1_repair_log_backup                       104857600      512 104857088   1% /mnt/log-backup
192.168.175.117:/SS1_shared_Clone_05132205140448713/usr-sap  262144064 10090496 252053568   4% /usr/sap/SS1
192.168.175.117:/SS1_shared_Clone_05132205140448713/shared   262144064 10090496 252053568   4% /hana/shared
192.168.175.117:/SS1_data_mnt00001_Clone_0421220520054605    262144064  3732864 258411200   2% /hana/data/SS1/mnt00001
The SAP HANA system is now available and can be used, for example, as a repair system.
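As a final check, the availability of the system and tenant databases can be queried through the system database, again assuming the SS1KEY user store key from the lab setup.

# List all databases and their active status; run as ss1adm
/usr/sap/SS1/SYS/exe/hdb/hdbsql -U SS1KEY "SELECT DATABASE_NAME, ACTIVE_STATUS FROM SYS.M_DATABASES"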