Log shipping
The goal of a migration using log shipping is to create a copy of the original data files at a new location and then establish a method of shipping changes into the new environment.
Once established, log shipment and replay can be automated to keep the replica database largely in sync with the source. For example, a cron job can be scheduled to (a) copy the most recent logs to the new location and (b) replay them every 15 minutes. Doing so provides minimal disruption at the time of cutover because no more than 15 minutes of archive logs must be replayed.
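As a sketch, the 15-minute ship-and-replay job described above might look like the following. All hostnames, paths, and the script name in the crontab entry are hypothetical examples, not part of this procedure.

```shell
#!/bin/sh
# Hypothetical sketch of a 15-minute ship-and-replay job. Hostnames and
# paths are examples only. Step (a) copies the newest archive logs from
# the source server; step (b) replays everything available on the replica.
SOURCE_HOST=source-server        # assumed source hostname
ARCH_DIR=/logs/ORCL/arch         # assumed archive log destination

ship_logs() {
    # Copy the current archive logs; files already present are simply
    # overwritten with identical data.
    scp "${SOURCE_HOST}:${ARCH_DIR}/*" "${ARCH_DIR}/"
}

replay_logs() {
    # AUTO answers the "Specify log" prompt so all available logs are
    # applied; the trailing "cannot open archived log" error for the
    # next, not-yet-created log is expected.
    sqlplus -S / as sysdba <<'EOF'
recover database until cancel;
AUTO
EOF
}

# Example crontab entry running both steps every 15 minutes:
# */15 * * * * /home/oracle/ship_and_replay.sh
```

Because replay stops cleanly when it runs out of logs, the job is safe to run even when no new logs have arrived.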
The procedure shown below is also essentially a database clone operation. The logic shown is similar to the engine within NetApp SnapManager for Oracle (SMO) and the NetApp SnapCenter Oracle Plug-in. Some customers have used the procedure shown within scripts or WFA workflows for custom cloning operations. Although this procedure is more manual than using either SMO or SnapCenter, it is still readily scripted, and the data management APIs within ONTAP further simplify the process.
Log shipping - file system to file system
This example demonstrates the migration of a database called WAFFLE from an ordinary file system to another ordinary file system located on a different server. It also illustrates the use of SnapMirror to make a rapid copy of data files, but this is not an integral part of the overall procedure.
Create database backup
The first step is to create a database backup. Specifically, this procedure requires a set of data files that can be used for archive log replay.
Environment
In this example, the source database is on an ONTAP system. The simplest method to create a backup of a database is by using a snapshot. The database is placed in hot backup mode for a few seconds while a snapshot create operation is executed on the volume hosting the data files.
SQL> alter database begin backup;

Database altered.
Cluster01::*> snapshot create -vserver vserver1 -volume jfsc1_oradata hotbackup
Cluster01::*>
SQL> alter database end backup;

Database altered.
The result is a snapshot on disk called hotbackup that contains an image of the data files while in hot backup mode. When combined with the appropriate archive logs to make the data files consistent, the data in this snapshot can be used as the basis of a restore or a clone. In this case, it is replicated to the new server.
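The three commands above are easily combined into a script. A minimal sketch is shown below, assuming passwordless SSH from the database host to the cluster management LIF; the cluster, SVM, and volume names are taken from this walkthrough, and the admin account is an assumption.

```shell
#!/bin/sh
# Hypothetical wrapper for the hot-backup snapshot shown above. The
# database stays in backup mode only for the seconds the snapshot takes.
CLUSTER=Cluster01      # assumed cluster management LIF
SVM=vserver1
VOL=jfsc1_oradata
SNAP=hotbackup

hot_backup_snapshot() {
    sqlplus -S / as sysdba <<'EOF'
alter database begin backup;
EOF
    # Assumes passwordless ssh to the cluster admin account.
    ssh "admin@$CLUSTER" "snapshot create -vserver $SVM -volume $VOL $SNAP"
    sqlplus -S / as sysdba <<'EOF'
alter database end backup;
EOF
}
```

A production script would also check each command's exit status so that the database never remains in backup mode after a failure.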
Restore to new environment
The backup must now be restored in the new environment. This can be done in a number of ways, including Oracle RMAN, restoration from a backup application like NetBackup, or a simple copy operation of data files that were placed in hot backup mode.
In this example, SnapMirror is used to replicate the snapshot hotbackup to a new location.
- Create a new volume to receive the snapshot data, and initialize the mirroring from jfsc1_oradata to vol_oradata.

Cluster01::*> volume create -vserver vserver1 -volume vol_oradata -aggregate data_01 -size 20g -state online -type DP -snapshot-policy none -policy jfsc3
[Job 833] Job succeeded: Successful

Cluster01::*> snapmirror initialize -source-path vserver1:jfsc1_oradata -destination-path vserver1:vol_oradata
Operation is queued: snapmirror initialize of destination "vserver1:vol_oradata".

Cluster01::*> volume mount -vserver vserver1 -volume vol_oradata -junction-path /vol_oradata
Cluster01::*>
- After the mirror state changes to SnapMirrored, indicating that baseline synchronization is complete, update the mirror based specifically on the desired snapshot.

Cluster01::*> snapmirror show -destination-path vserver1:vol_oradata -fields state
source-path             destination-path        state
----------------------- ----------------------- ------------
vserver1:jfsc1_oradata  vserver1:vol_oradata    SnapMirrored

Cluster01::*> snapmirror update -destination-path vserver1:vol_oradata -source-snapshot hotbackup
Operation is queued: snapmirror update of destination "vserver1:vol_oradata".
- Successful synchronization can be verified by viewing the newest-snapshot field on the mirror volume.

Cluster01::*> snapmirror show -destination-path vserver1:vol_oradata -fields newest-snapshot
source-path             destination-path        newest-snapshot
----------------------- ----------------------- ---------------
vserver1:jfsc1_oradata  vserver1:vol_oradata    hotbackup
- The mirror can then be broken.

Cluster01::> snapmirror break -destination-path vserver1:vol_oradata
Operation succeeded: snapmirror break for destination "vserver1:vol_oradata".
Cluster01::>
- Mount the new file system.

With block-based file systems, the precise procedures vary based on the LVM in use. FC zoning or iSCSI connections must be configured. After connectivity to the LUNs is established, commands such as Linux pvscan might be needed to discover which volume groups or LUNs must be configured before they are discoverable. In this example, a simple NFS file system is used, which can be mounted directly.

fas8060-nfs1:/vol_oradata  19922944  1639360  18283584   9% /oradata
fas8060-nfs1:/vol_logs      9961472      128   9961344   1% /logs
Create controlfile creation template
You must next create a controlfile template. The backup controlfile to trace command creates the text commands needed to recreate a controlfile. This function can be useful for restoring a database from backup under some circumstances, and it is often used with scripts that perform tasks such as database cloning.
- The output of the following command is used to recreate the controlfiles for the migrated database.

SQL> alter database backup controlfile to trace as '/tmp/waffle.ctrl';

Database altered.
- After the trace file has been created, copy it to the new server.

[oracle@jfsc3 tmp]$ scp oracle@jfsc1:/tmp/waffle.ctrl /tmp/
oracle@jfsc1's password:
waffle.ctrl                                   100% 5199     5.1KB/s   00:00
Backup parameter file
A parameter file is also required in the new environment. The simplest method is to create a pfile from the current spfile or pfile. In this example, the source database is using an spfile.
SQL> create pfile='/tmp/waffle.tmp.pfile' from spfile;

File created.
Create oratab entry
The creation of an oratab entry is required for the proper functioning of utilities such as oraenv. Create the following entry on the new server.
WAFFLE:/orabin/product/12.1.0/dbhome_1:N
Prepare directory structure
If the required directories are not already present, you must create them, or the database startup procedure fails. The commands below create the minimum required directories.
[oracle@jfsc3 ~]$ . oraenv
ORACLE_SID = [oracle] ? WAFFLE
The Oracle base has been set to /orabin
[oracle@jfsc3 ~]$ cd $ORACLE_BASE
[oracle@jfsc3 orabin]$ cd admin
[oracle@jfsc3 admin]$ mkdir WAFFLE
[oracle@jfsc3 admin]$ cd WAFFLE
[oracle@jfsc3 WAFFLE]$ mkdir adump dpdump pfile scripts xdb_wallet
Parameter file updates
- To copy the parameter file to the new server, run the following commands. The default location is the $ORACLE_HOME/dbs directory. In this case, the pfile can be placed anywhere; it is only being used as an intermediate step in the migration process.

[oracle@jfsc3 admin]$ scp oracle@jfsc1:/tmp/waffle.tmp.pfile $ORACLE_HOME/dbs/waffle.tmp.pfile
oracle@jfsc1's password:
waffle.pfile                                  100%  916     0.9KB/s   00:00
- Edit the file as required. For example, if the archive log location has changed, the pfile must be altered to reflect the new location. In this example, only the controlfiles are being relocated, in part to distribute them between the log and data file systems.

[root@jfsc1 tmp]# cat waffle.pfile
WAFFLE.__data_transfer_cache_size=0
WAFFLE.__db_cache_size=507510784
WAFFLE.__java_pool_size=4194304
WAFFLE.__large_pool_size=20971520
WAFFLE.__oracle_base='/orabin'#ORACLE_BASE set from environment
WAFFLE.__pga_aggregate_target=268435456
WAFFLE.__sga_target=805306368
WAFFLE.__shared_io_pool_size=29360128
WAFFLE.__shared_pool_size=234881024
WAFFLE.__streams_pool_size=0
*.audit_file_dest='/orabin/admin/WAFFLE/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='/oradata/WAFFLE/control01.ctl','/logs/WAFFLE/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='WAFFLE'
*.diagnostic_dest='/orabin'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=WAFFLEXDB)'
*.log_archive_dest_1='LOCATION=/logs/WAFFLE/arch'
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=256m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=768m
*.undo_tablespace='UNDOTBS1'
- After the edits are complete, create an spfile based on this pfile.

SQL> create spfile from pfile='waffle.tmp.pfile';

File created.
Recreate controlfiles
In a previous step, the output of backup controlfile to trace was copied to the new server. The specific portion of the output required is the controlfile recreation command, found in the file under the section marked Set #1. NORESETLOGS. It starts with the line CREATE CONTROLFILE REUSE DATABASE, should include the word NORESETLOGS, and ends with the semicolon (;) character.
- In this example procedure, the file reads as follows.

CREATE CONTROLFILE REUSE DATABASE "WAFFLE" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/logs/WAFFLE/redo/redo01.log' SIZE 50M BLOCKSIZE 512,
  GROUP 2 '/logs/WAFFLE/redo/redo02.log' SIZE 50M BLOCKSIZE 512,
  GROUP 3 '/logs/WAFFLE/redo/redo03.log' SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  '/oradata/WAFFLE/system01.dbf',
  '/oradata/WAFFLE/sysaux01.dbf',
  '/oradata/WAFFLE/undotbs01.dbf',
  '/oradata/WAFFLE/users01.dbf'
CHARACTER SET WE8MSWIN1252
;
- Edit this script as desired to reflect the new location of the various files. For example, certain data files known to support high I/O might be redirected to a file system on a high-performance storage tier. In other cases, the changes might be purely for administrative reasons, such as isolating the data files of a given PDB in dedicated volumes.
- In this example, the DATAFILE stanza is left unchanged, but the redo logs are moved to a new location in /redo rather than sharing space with archive logs in /logs.

CREATE CONTROLFILE REUSE DATABASE "WAFFLE" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/redo/redo01.log' SIZE 50M BLOCKSIZE 512,
  GROUP 2 '/redo/redo02.log' SIZE 50M BLOCKSIZE 512,
  GROUP 3 '/redo/redo03.log' SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  '/oradata/WAFFLE/system01.dbf',
  '/oradata/WAFFLE/sysaux01.dbf',
  '/oradata/WAFFLE/undotbs01.dbf',
  '/oradata/WAFFLE/users01.dbf'
CHARACTER SET WE8MSWIN1252
;
SQL> startup nomount;

ORACLE instance started.

Total System Global Area  805306368 bytes
Fixed Size                  2929552 bytes
Variable Size             331353200 bytes
Database Buffers          465567744 bytes
Redo Buffers                5455872 bytes

SQL> CREATE CONTROLFILE REUSE DATABASE "WAFFLE" NORESETLOGS ARCHIVELOG
  2      MAXLOGFILES 16
  3      MAXLOGMEMBERS 3
  4      MAXDATAFILES 100
  5      MAXINSTANCES 8
  6      MAXLOGHISTORY 292
  7  LOGFILE
  8    GROUP 1 '/redo/redo01.log' SIZE 50M BLOCKSIZE 512,
  9    GROUP 2 '/redo/redo02.log' SIZE 50M BLOCKSIZE 512,
 10    GROUP 3 '/redo/redo03.log' SIZE 50M BLOCKSIZE 512
 11  -- STANDBY LOGFILE
 12  DATAFILE
 13    '/oradata/WAFFLE/system01.dbf',
 14    '/oradata/WAFFLE/sysaux01.dbf',
 15    '/oradata/WAFFLE/undotbs01.dbf',
 16    '/oradata/WAFFLE/users01.dbf'
 17  CHARACTER SET WE8MSWIN1252
 18  ;

Control file created.

SQL>
If any files are misplaced or parameters are misconfigured, errors are generated that indicate what must be fixed. The database is mounted, but it is not yet open and cannot be opened because the data files in use are still marked as being in hot backup mode. Archive logs must first be applied to make the database consistent.
Initial log replication
At least one log replay operation is required to make the data files consistent. Many options are available to replay logs. In some cases, the archive log location on the original server can be shared through NFS, and log replay can be done directly. In other cases, the archive logs must be copied.
For example, a simple scp operation can copy all current logs from the source server to the migration server:

[oracle@jfsc3 arch]$ scp jfsc1:/logs/WAFFLE/arch/* ./
oracle@jfsc1's password:
1_22_912662036.dbf                            100%   47MB  47.0MB/s   00:01
1_23_912662036.dbf                            100%   40MB  40.4MB/s   00:00
1_24_912662036.dbf                            100%   45MB  45.4MB/s   00:00
1_25_912662036.dbf                            100%   41MB  40.9MB/s   00:01
1_26_912662036.dbf                            100%   39MB  39.4MB/s   00:00
1_27_912662036.dbf                            100%   39MB  38.7MB/s   00:00
1_28_912662036.dbf                            100%   40MB  40.1MB/s   00:01
1_29_912662036.dbf                            100%   17MB  16.9MB/s   00:00
1_30_912662036.dbf                            100%  636KB 636.0KB/s   00:00
Initial log replay
After the files are in the archive log location, they can be replayed by issuing the command recover database until cancel, followed by the response AUTO to automatically replay all available logs.
SQL> recover database until cancel;
ORA-00279: change 382713 generated at 05/24/2016 09:00:54 needed for thread 1
ORA-00289: suggestion : /logs/WAFFLE/arch/1_23_912662036.dbf
ORA-00280: change 382713 for thread 1 is in sequence #23
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
ORA-00279: change 405712 generated at 05/24/2016 15:01:05 needed for thread 1
ORA-00289: suggestion : /logs/WAFFLE/arch/1_24_912662036.dbf
ORA-00280: change 405712 for thread 1 is in sequence #24
ORA-00278: log file '/logs/WAFFLE/arch/1_23_912662036.dbf' no longer needed for this recovery
...
ORA-00279: change 713874 generated at 05/26/2016 04:26:43 needed for thread 1
ORA-00289: suggestion : /logs/WAFFLE/arch/1_31_912662036.dbf
ORA-00280: change 713874 for thread 1 is in sequence #31
ORA-00278: log file '/logs/WAFFLE/arch/1_30_912662036.dbf' no longer needed for this recovery
ORA-00308: cannot open archived log '/logs/WAFFLE/arch/1_31_912662036.dbf'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
The final archive log replay reports an error, but this is normal. The error indicates that sqlplus was seeking a particular log file and did not find it, most likely because the log file does not exist yet.
If the source database can be shut down before copying archive logs, this step must be performed only once. The archive logs are copied and replayed, and then the process can continue directly to the cutover process that replicates the critical redo logs.
Incremental log replication and replay
In most cases, migration is not performed right away. It could be days or even weeks before the migration process is completed, which means that the logs must be continuously shipped to the replica database and replayed. Therefore, when cutover arrives, minimal data must be transferred and replayed.
This process can be scripted in many ways, but one of the more popular methods is using rsync, a common file replication utility. The safest way to use this utility is to configure it as a daemon. For example, the rsyncd.conf file that follows shows how to create a resource called waffle.arch that is accessed with Oracle user credentials and is mapped to /logs/WAFFLE/arch. Most importantly, the resource is set to read-only, which allows the production data to be read but not altered.
[root@jfsc1 arch]# cat /etc/rsyncd.conf
[waffle.arch]
        uid=oracle
        gid=dba
        path=/logs/WAFFLE/arch
        read only = true
[root@jfsc1 arch]# rsync --daemon
The following command synchronizes the new server's archive log destination against the rsync resource waffle.arch on the original server. The t argument in rsync -potg causes the file list to be compared based on timestamp, and only new files are copied. This process provides an incremental update of the new server. This command can also be scheduled in cron to run on a regular basis.
[oracle@jfsc3 arch]$ rsync -potg --stats --progress jfsc1::waffle.arch/* /logs/WAFFLE/arch/
1_31_912662036.dbf
      650240 100%  124.02MB/s    0:00:00 (xfer#1, to-check=8/18)
1_32_912662036.dbf
     4873728 100%  110.67MB/s    0:00:00 (xfer#2, to-check=7/18)
1_33_912662036.dbf
     4088832 100%   50.64MB/s    0:00:00 (xfer#3, to-check=6/18)
1_34_912662036.dbf
     8196096 100%   54.66MB/s    0:00:00 (xfer#4, to-check=5/18)
1_35_912662036.dbf
    19376128 100%   57.75MB/s    0:00:00 (xfer#5, to-check=4/18)
1_36_912662036.dbf
       71680 100%  201.15kB/s    0:00:00 (xfer#6, to-check=3/18)
1_37_912662036.dbf
     1144320 100%    3.06MB/s    0:00:00 (xfer#7, to-check=2/18)
1_38_912662036.dbf
    35757568 100%   63.74MB/s    0:00:00 (xfer#8, to-check=1/18)
1_39_912662036.dbf
      984576 100%    1.63MB/s    0:00:00 (xfer#9, to-check=0/18)

Number of files: 18
Number of files transferred: 9
Total file size: 399653376 bytes
Total transferred file size: 75143168 bytes
Literal data: 75143168 bytes
Matched data: 0 bytes
File list size: 474
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 204
Total bytes received: 75153219

sent 204 bytes  received 75153219 bytes  150306846.00 bytes/sec
total size is 399653376  speedup is 5.32
After the logs have been received, they must be replayed. Previous examples show the use of sqlplus to manually run recover database until cancel, a process that can easily be automated. The example shown here uses the script described in Replay Logs on Database. The script accepts an argument that specifies the database requiring a replay operation, which permits the same script to be used in a multidatabase migration effort.
[oracle@jfsc3 logs]$ ./replay.logs.pl WAFFLE
ORACLE_SID = [WAFFLE] ? The Oracle base remains unchanged with value /orabin

SQL*Plus: Release 12.1.0.2.0 Production on Thu May 26 10:47:16 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> ORA-00279: change 713874 generated at 05/26/2016 04:26:43 needed for thread 1
ORA-00289: suggestion : /logs/WAFFLE/arch/1_31_912662036.dbf
ORA-00280: change 713874 for thread 1 is in sequence #31
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
ORA-00279: change 814256 generated at 05/26/2016 04:52:30 needed for thread 1
ORA-00289: suggestion : /logs/WAFFLE/arch/1_32_912662036.dbf
ORA-00280: change 814256 for thread 1 is in sequence #32
ORA-00278: log file '/logs/WAFFLE/arch/1_31_912662036.dbf' no longer needed for this recovery
ORA-00279: change 814780 generated at 05/26/2016 04:53:04 needed for thread 1
ORA-00289: suggestion : /logs/WAFFLE/arch/1_33_912662036.dbf
ORA-00280: change 814780 for thread 1 is in sequence #33
ORA-00278: log file '/logs/WAFFLE/arch/1_32_912662036.dbf' no longer needed for this recovery
...
ORA-00279: change 1120099 generated at 05/26/2016 09:59:21 needed for thread 1
ORA-00289: suggestion : /logs/WAFFLE/arch/1_40_912662036.dbf
ORA-00280: change 1120099 for thread 1 is in sequence #40
ORA-00278: log file '/logs/WAFFLE/arch/1_39_912662036.dbf' no longer needed for this recovery
ORA-00308: cannot open archived log '/logs/WAFFLE/arch/1_40_912662036.dbf'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
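The replay.logs.pl script itself is listed elsewhere in this document. A hypothetical shell equivalent of the same logic is sketched below; the function name and the use of ORAENV_ASK for non-interactive environment setup are assumptions.

```shell
#!/bin/sh
# Hypothetical shell sketch of a per-database replay helper like the one
# referenced above. Takes the ORACLE_SID as its argument so the same
# function serves a multidatabase migration.

replay_db() {
    sid="$1"
    if [ -z "$sid" ]; then
        echo "usage: replay_db <ORACLE_SID>" >&2
        return 1
    fi
    ORACLE_SID="$sid"; export ORACLE_SID
    ORAENV_ASK=NO; export ORAENV_ASK   # oraenv reads ORACLE_SID silently
    . oraenv
    # AUTO applies every available archive log; the final ORA-00308 for
    # the next, not-yet-shipped log is expected and harmless.
    sqlplus -S / as sysdba <<'EOF'
recover database until cancel;
AUTO
EOF
}

# Usage: replay_db WAFFLE
```

Returning a nonzero status on a missing argument makes the helper safe to call from a larger migration driver script.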
Cutover
When you are ready to cut over to the new environment, you must perform one final synchronization that includes both archive logs and the redo logs. If the original redo log location is not already known, it can be identified as follows:
SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/logs/WAFFLE/redo/redo01.log
/logs/WAFFLE/redo/redo02.log
/logs/WAFFLE/redo/redo03.log
- Shut down the source database.
- Perform one final synchronization of the archive logs on the new server with the desired method.
- Copy the source redo logs to the new server. In this example, the redo logs were relocated to a new directory at /redo.

[oracle@jfsc3 logs]$ scp jfsc1:/logs/WAFFLE/redo/* /redo/
oracle@jfsc1's password:
redo01.log                                    100%   50MB  50.0MB/s   00:01
redo02.log                                    100%   50MB  50.0MB/s   00:00
redo03.log                                    100%   50MB  50.0MB/s   00:00
- At this stage, the new database environment contains all of the files required to bring it to the exact same state as the source. The archive logs must be replayed one final time.

SQL> recover database until cancel;
ORA-00279: change 1120099 generated at 05/26/2016 09:59:21 needed for thread 1
ORA-00289: suggestion : /logs/WAFFLE/arch/1_40_912662036.dbf
ORA-00280: change 1120099 for thread 1 is in sequence #40
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
ORA-00308: cannot open archived log '/logs/WAFFLE/arch/1_40_912662036.dbf'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00308: cannot open archived log '/logs/WAFFLE/arch/1_40_912662036.dbf'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
- Once archive log replay is complete, the redo logs must be replayed. If the message Media recovery complete is returned, the process is successful; the databases are synchronized, and the replica can be opened.

SQL> recover database;
Media recovery complete.
SQL> alter database open;
Database altered.
Log shipping - ASM to file system
This example demonstrates the use of Oracle RMAN to migrate a database. It is very similar to the prior example of file system to file system log shipping, but the files on ASM are not visible to the host. The only options for migrating data located on ASM devices are relocating the ASM LUN or using Oracle RMAN to perform the copy operations.
Although RMAN is a requirement for copying files from Oracle ASM, the use of RMAN is not limited to ASM. RMAN can be used to migrate from any type of storage to any other type.
This example shows the relocation of a database called PANCAKE from ASM storage to a regular file system located on a different server at the paths /oradata and /logs.
Create database backup
The first step is to create a backup of the database to be migrated to an alternate server. Because the source uses Oracle ASM, RMAN must be used. A simple RMAN backup can be performed as follows. This method creates a tagged backup that can be easily identified by RMAN later in the procedure.
The first command defines the type of destination for the backup and the location to be used. The second initiates the backup of the data files only.
RMAN> configure channel device type disk format '/rman/pancake/%U';

using target database control file instead of recovery catalog
old RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/rman/pancake/%U';
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/rman/pancake/%U';
new RMAN configuration parameters are successfully stored

RMAN> backup database tag 'ONTAP_MIGRATION';

Starting backup at 24-MAY-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=251 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+ASM0/PANCAKE/system01.dbf
input datafile file number=00002 name=+ASM0/PANCAKE/sysaux01.dbf
input datafile file number=00003 name=+ASM0/PANCAKE/undotbs101.dbf
input datafile file number=00004 name=+ASM0/PANCAKE/users01.dbf
channel ORA_DISK_1: starting piece 1 at 24-MAY-16
channel ORA_DISK_1: finished piece 1 at 24-MAY-16
piece handle=/rman/pancake/1gr6c161_1_1 tag=ONTAP_MIGRATION comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 24-MAY-16
channel ORA_DISK_1: finished piece 1 at 24-MAY-16
piece handle=/rman/pancake/1hr6c164_1_1 tag=ONTAP_MIGRATION comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-MAY-16
Backup controlfile
A backup controlfile is required later in the procedure for the duplicate database operation.
RMAN> backup current controlfile format '/rman/pancake/ctrl.bkp';

Starting backup at 24-MAY-16
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 24-MAY-16
channel ORA_DISK_1: finished piece 1 at 24-MAY-16
piece handle=/rman/pancake/ctrl.bkp tag=TAG20160524T032651 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-MAY-16
Backup parameter file
A parameter file is also required in the new environment. The simplest method is to create a pfile from the current spfile or pfile. In this example, the source database uses an spfile.
RMAN> create pfile='/rman/pancake/pfile' from spfile;

Statement processed
ASM file rename script
Several file locations currently defined in the controlfiles change when the database is moved. The following script creates an RMAN script to make the process easier. This example shows a database with a very small number of data files, but typically databases contain hundreds or even thousands of data files.
This script can be found in ASM to File System Name Conversion, and it does two things.
First, it creates a parameter called log_file_name_convert to redefine the redo log locations. The parameter is essentially a list of alternating fields: the first field is the location of a current redo log, the second field is the location on the new server, and the pattern then repeats.
The second function is to supply a template for data file renaming. The script loops through the data files, pulls the name and file number information, and formats it as an RMAN script. It then does the same with the temp files. The result is a simple RMAN script that can be edited as desired to make sure that the files are restored to the desired locations.
SQL> @/rman/mk.rename.scripts.sql
Parameters for log file conversion:

*.log_file_name_convert = '+ASM0/PANCAKE/redo01.log', '/NEW_PATH/redo01.log','+ASM0/PANCAKE/redo02.log', '/NEW_PATH/redo02.log','+ASM0/PANCAKE/redo03.log', '/NEW_PATH/redo03.log'

rman duplication script:

run
{
set newname for datafile 1 to '+ASM0/PANCAKE/system01.dbf';
set newname for datafile 2 to '+ASM0/PANCAKE/sysaux01.dbf';
set newname for datafile 3 to '+ASM0/PANCAKE/undotbs101.dbf';
set newname for datafile 4 to '+ASM0/PANCAKE/users01.dbf';
set newname for tempfile 1 to '+ASM0/PANCAKE/temp01.dbf';
duplicate target database for standby backup location INSERT_PATH_HERE;
}

PL/SQL procedure successfully completed.
Capture the output of this screen. The log_file_name_convert parameter is placed in the pfile as described below. The RMAN data file rename and duplicate script must be edited accordingly to place the data files in the desired locations. In this example, they are all placed in /oradata/pancake.
run
{
set newname for datafile 1 to '/oradata/pancake/pancake.dbf';
set newname for datafile 2 to '/oradata/pancake/sysaux.dbf';
set newname for datafile 3 to '/oradata/pancake/undotbs1.dbf';
set newname for datafile 4 to '/oradata/pancake/users.dbf';
set newname for tempfile 1 to '/oradata/pancake/temp.dbf';
duplicate target database for standby backup location '/rman/pancake';
}
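A generator of this kind can also be sketched directly in shell. The following hypothetical helper queries v$datafile and v$tempfile and emits the set newname lines; the function names and output formatting are assumptions, not the actual mk.rename.scripts.sql listed in this document.

```shell
#!/bin/sh
# Hypothetical sketch of a "set newname" template generator.
# newname_line is a pure formatter; gen_rename_script queries the source
# database for its file numbers and names.

newname_line() {
    # $1 = file type (datafile|tempfile), $2 = file#, $3 = path
    printf "set newname for %s %s to '%s';\n" "$1" "$2" "$3"
}

gen_rename_script() {
    sqlplus -S / as sysdba <<'EOF'
set heading off feedback off pagesize 0 linesize 200
select 'set newname for datafile '||file#||' to '''||name||''';' from v$datafile order by file#;
select 'set newname for tempfile '||file#||' to '''||name||''';' from v$tempfile order by file#;
EOF
}

# Usage: gen_rename_script > rename.rman, then edit the paths and wrap
# the result in "run { ... duplicate target database ... }".
```

Generating the template from the live catalog avoids hand-typing hundreds of file names on large databases.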
Prepare directory structure
The scripts are almost ready to execute, but first the directory structure must be in place. If the required directories are not already present, they must be created, or the database startup procedure fails. The example below reflects the minimum requirements.
[oracle@jfsc2 ~]$ mkdir /oradata/pancake
[oracle@jfsc2 ~]$ mkdir /logs/pancake
[oracle@jfsc2 ~]$ cd /orabin/admin
[oracle@jfsc2 admin]$ mkdir PANCAKE
[oracle@jfsc2 admin]$ cd PANCAKE
[oracle@jfsc2 PANCAKE]$ mkdir adump dpdump pfile scripts xdb_wallet
Create oratab entry
The following oratab entry is required for utilities such as oraenv to work properly.
PANCAKE:/orabin/product/12.1.0/dbhome_1:N
Parameter updates
The saved pfile must be updated to reflect any path changes on the new server. The data file paths are changed by the RMAN duplication script, and nearly all databases require changes to the control_files and log_archive_dest parameters. There might also be audit file locations that must be changed, and parameters such as db_create_file_dest might not be relevant outside of ASM. An experienced DBA should carefully review the proposed changes before proceeding.
In this example, the key changes are the controlfile locations, the log archive destination, and the addition of the log_file_name_convert parameter.
PANCAKE.__data_transfer_cache_size=0
PANCAKE.__db_cache_size=545259520
PANCAKE.__java_pool_size=4194304
PANCAKE.__large_pool_size=25165824
PANCAKE.__oracle_base='/orabin'#ORACLE_BASE set from environment
PANCAKE.__pga_aggregate_target=268435456
PANCAKE.__sga_target=805306368
PANCAKE.__shared_io_pool_size=29360128
PANCAKE.__shared_pool_size=192937984
PANCAKE.__streams_pool_size=0
*.audit_file_dest='/orabin/admin/PANCAKE/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='/oradata/pancake/control01.ctl','/logs/pancake/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='PANCAKE'
*.diagnostic_dest='/orabin'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=PANCAKEXDB)'
*.log_archive_dest_1='LOCATION=/logs/pancake'
*.log_archive_format='%t_%s_%r.dbf'
*.log_file_name_convert = '+ASM0/PANCAKE/redo01.log', '/logs/pancake/redo01.log', '+ASM0/PANCAKE/redo02.log', '/logs/pancake/redo02.log', '+ASM0/PANCAKE/redo03.log', '/logs/pancake/redo03.log'
*.open_cursors=300
*.pga_aggregate_target=256m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=768m
*.undo_tablespace='UNDOTBS1'
After the new parameters are confirmed, the parameters must be put into effect. Multiple options exist, but most customers create an spfile based on the text pfile.
bash-4.1$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jan 8 11:17:40 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> create spfile from pfile='/rman/pancake/pfile';

File created.
Startup nomount
The final step before replicating the database is to bring up the database processes without mounting the files. In this step, problems with the spfile might become evident. If the startup nomount command fails because of a parameter error, it is simple to shut down, correct the pfile template, reload it as an spfile, and try again.
SQL> startup nomount;

ORACLE instance started.

Total System Global Area  805306368 bytes
Fixed Size                  2929552 bytes
Variable Size             373296240 bytes
Database Buffers          423624704 bytes
Redo Buffers                5455872 bytes
Duplicate the database
Restoring the prior RMAN backup to the new location consumes more time than the other steps in this process. The database must be duplicated without changing the database ID (DBID) and without resetting the logs; otherwise, archive logs could not be applied afterward, and log replay is required to fully synchronize the copies.
Connect to the database with RMAN using the auxiliary connection and issue the duplicate database command with the script created in a previous step.
[oracle@jfsc2 pancake]$ rman auxiliary /

Recovery Manager: Release 12.1.0.2.0 - Production on Tue May 24 03:04:56 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to auxiliary database: PANCAKE (not mounted)

RMAN> run
2> {
3> set newname for datafile 1 to '/oradata/pancake/pancake.dbf';
4> set newname for datafile 2 to '/oradata/pancake/sysaux.dbf';
5> set newname for datafile 3 to '/oradata/pancake/undotbs1.dbf';
6> set newname for datafile 4 to '/oradata/pancake/users.dbf';
7> set newname for tempfile 1 to '/oradata/pancake/temp.dbf';
8> duplicate target database for standby backup location '/rman/pancake';
9> }

executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME

Starting Duplicate Db at 24-MAY-16

contents of Memory Script:
{
   restore clone standby controlfile from '/rman/pancake/ctrl.bkp';
}
executing Memory Script

Starting restore at 24-MAY-16
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=243 device type=DISK
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/oradata/pancake/control01.ctl
output file name=/logs/pancake/control02.ctl
Finished restore at 24-MAY-16

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=243 device type=DISK

contents of Memory Script:
{
   set newname for tempfile 1 to "/oradata/pancake/temp.dbf";
   switch clone tempfile all;
   set newname for datafile 1 to "/oradata/pancake/pancake.dbf";
   set newname for datafile 2 to "/oradata/pancake/sysaux.dbf";
   set newname for datafile 3 to "/oradata/pancake/undotbs1.dbf";
   set newname for datafile 4 to "/oradata/pancake/users.dbf";
   restore clone database;
}
executing Memory Script

executing command: SET NEWNAME
renamed tempfile 1 to /oradata/pancake/temp.dbf in control file
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME

Starting restore at 24-MAY-16
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /oradata/pancake/pancake.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /oradata/pancake/sysaux.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /oradata/pancake/undotbs1.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /oradata/pancake/users.dbf
channel ORA_AUX_DISK_1: reading from backup piece /rman/pancake/1gr6c161_1_1
channel ORA_AUX_DISK_1: piece handle=/rman/pancake/1gr6c161_1_1 tag=ONTAP_MIGRATION
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 24-MAY-16

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=5 STAMP=912655725 file name=/oradata/pancake/pancake.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=6 STAMP=912655725 file name=/oradata/pancake/sysaux.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=7 STAMP=912655725 file name=/oradata/pancake/undotbs1.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=8 STAMP=912655725 file name=/oradata/pancake/users.dbf
Finished Duplicate Db at 24-MAY-16
Initial log replication
You must now ship the changes from the source database to the new location. Doing so might require a combination of steps. The simplest method is to have RMAN on the source database write archive log backups to a shared network location. If a shared location is not available, an alternative is to use RMAN to write to a local file system and then copy the files with rcp or rsync.
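Where no shared location exists, the copy step can be a simple rsync invocation scheduled on the source host. A minimal sketch, assuming a hypothetical destination host jfsc2 and the log shipping paths used in this example:

```shell
#!/bin/sh
# Sketch only -- hypothetical host and paths; adjust for the real environment.
SRC=/rman/pancake/logship/
DEST=jfsc2:/rman/pancake/logship/

# -a preserves timestamps and permissions; --ignore-existing avoids
# re-copying archive log backups that have already been shipped.
CMD="rsync -a --ignore-existing $SRC $DEST"
echo "$CMD"
# Uncomment to actually ship the logs:
# $CMD
```

Because archive log backups are written once and never modified, `--ignore-existing` is safe and keeps each transfer incremental.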
In this example, the /rman directory is an NFS share that is available to both the original and migrated database.
One important detail here is the disk format clause. The disk format of the backup is %h_%e_%a.dbf, which names each file by thread number, sequence number, and activation ID. Although the letters are different, this matches the log_archive_format='%t_%s_%r.dbf' parameter in the pfile, which also names archive logs by thread number, sequence number, and activation ID. The end result is that the log file backups on the source use a naming convention that is expected by the database. Doing so makes operations such as recover database much simpler because sqlplus correctly anticipates the names of the archive logs to be replayed.
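As an illustration, thread 1, sequence 54, and the activation ID 912576125 seen in this example produce the same file name under both naming conventions:

```shell
#!/bin/sh
# Values taken from the example output in this section.
# Mapping: %h/%t = thread, %e/%s = sequence, %a/%r = activation/resetlogs ID.
THREAD=1
SEQUENCE=54
ACTIVATION_ID=912576125

# The RMAN backup piece name ('%h_%e_%a.dbf') and the archive log name
# ('%t_%s_%r.dbf') come out identical:
NAME="${THREAD}_${SEQUENCE}_${ACTIVATION_ID}.dbf"
echo "$NAME"
```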
RMAN> configure channel device type disk format '/rman/pancake/logship/%h_%e_%a.dbf';
old RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/rman/pancake/arch/%h_%e_%a.dbf';
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/rman/pancake/logship/%h_%e_%a.dbf';
new RMAN configuration parameters are successfully stored
released channel: ORA_DISK_1
RMAN> backup as copy archivelog from time 'sysdate-2';
Starting backup at 24-MAY-16
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=373 device type=DISK
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=54 RECID=70 STAMP=912658508
output file name=/rman/pancake/logship/1_54_912576125.dbf RECID=123 STAMP=912659482
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=41 RECID=29 STAMP=912654101
output file name=/rman/pancake/logship/1_41_912576125.dbf RECID=124 STAMP=912659483
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
...
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=45 RECID=33 STAMP=912654688
output file name=/rman/pancake/logship/1_45_912576125.dbf RECID=152 STAMP=912659514
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=47 RECID=36 STAMP=912654809
output file name=/rman/pancake/logship/1_47_912576125.dbf RECID=153 STAMP=912659515
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
Finished backup at 24-MAY-16
Initial log replay
After the files are in the archive log location, they can be replayed by issuing the command recover standby database until cancel followed by the response AUTO, which automatically replays all available logs. The parameter file currently directs archive logs to /logs/archive, but this does not match the location where RMAN was used to save logs. The location can be temporarily redirected as follows before recovering the database.
SQL> alter system set log_archive_dest_1='LOCATION=/rman/pancake/logship' scope=memory;
System altered.
SQL> recover standby database until cancel;
ORA-00279: change 560224 generated at 05/24/2016 03:25:53 needed for thread 1
ORA-00289: suggestion : /rman/pancake/logship/1_49_912576125.dbf
ORA-00280: change 560224 for thread 1 is in sequence #49
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
ORA-00279: change 560353 generated at 05/24/2016 03:29:17 needed for thread 1
ORA-00289: suggestion : /rman/pancake/logship/1_50_912576125.dbf
ORA-00280: change 560353 for thread 1 is in sequence #50
ORA-00278: log file '/rman/pancake/logship/1_49_912576125.dbf' no longer needed for this recovery
...
ORA-00279: change 560591 generated at 05/24/2016 03:33:56 needed for thread 1
ORA-00289: suggestion : /rman/pancake/logship/1_54_912576125.dbf
ORA-00280: change 560591 for thread 1 is in sequence #54
ORA-00278: log file '/rman/pancake/logship/1_53_912576125.dbf' no longer needed for this recovery
ORA-00308: cannot open archived log '/rman/pancake/logship/1_54_912576125.dbf'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
The final archive log replay reports an error, but this is normal. The error indicates that sqlplus was seeking a particular log file and did not find it, most likely because the log file does not yet exist.
If the source database can be shut down before copying archive logs, this step must be performed only once. The archive logs are copied and replayed, and then the process can continue directly to the cutover process that replicates the critical redo logs.
Incremental log replication and replay
In most cases, migration is not performed right away. It could be days or even weeks before the migration process is complete, which means that the logs must be continuously shipped to the replica database and replayed. Doing so makes sure that minimal data must be transferred and replayed when the cutover arrives.
This process can easily be scripted. For example, the following command can be scheduled on the original database to make sure that the location used for log shipping is continuously updated.
[oracle@jfsc1 pancake]$ cat copylogs.rman
configure channel device type disk format '/rman/pancake/logship/%h_%e_%a.dbf';
backup as copy archivelog from time 'sysdate-2';
[oracle@jfsc1 pancake]$ rman target / cmdfile=copylogs.rman
Recovery Manager: Release 12.1.0.2.0 - Production on Tue May 24 04:36:19 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: PANCAKE (DBID=3574534589)
RMAN> configure channel device type disk format '/rman/pancake/logship/%h_%e_%a.dbf';
2> backup as copy archivelog from time 'sysdate-2';
3>
4>
using target database control file instead of recovery catalog
old RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/rman/pancake/logship/%h_%e_%a.dbf';
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/rman/pancake/logship/%h_%e_%a.dbf';
new RMAN configuration parameters are successfully stored
Starting backup at 24-MAY-16
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=369 device type=DISK
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=54 RECID=123 STAMP=912659482
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 05/24/2016 04:36:22
ORA-19635: input and output file names are identical: /rman/pancake/logship/1_54_912576125.dbf
continuing other job steps, job failed will not be re-run
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=41 RECID=124 STAMP=912659483
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 05/24/2016 04:36:23
ORA-19635: input and output file names are identical: /rman/pancake/logship/1_41_912576125.dbf
continuing other job steps, job failed will not be re-run
...
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=45 RECID=152 STAMP=912659514
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 05/24/2016 04:36:55
ORA-19635: input and output file names are identical: /rman/pancake/logship/1_45_912576125.dbf
continuing other job steps, job failed will not be re-run
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=47 RECID=153 STAMP=912659515
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 05/24/2016 04:36:57
ORA-19635: input and output file names are identical: /rman/pancake/logship/1_47_912576125.dbf
Recovery Manager complete.
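Both the copy and replay steps can then be placed on a schedule. A minimal sketch of the crontab entries, assuming hypothetical wrapper script paths; the 15-minute interval matches the example given in the introduction:

```shell
#!/bin/sh
# Hypothetical wrapper locations -- substitute the real paths.
COPY=/home/oracle/copylogs.sh      # wraps 'rman target / cmdfile=copylogs.rman'
REPLAY=/home/oracle/replaylogs.pl  # replays shipped logs on the new host

# Build crontab entries that ship and replay logs every 15 minutes.
SHIP_ENTRY="*/15 * * * * $COPY >> /tmp/copylogs.log 2>&1"
REPLAY_ENTRY="*/15 * * * * $REPLAY PANCAKE >> /tmp/replaylogs.log 2>&1"
echo "$SHIP_ENTRY"
echo "$REPLAY_ENTRY"
```

The ship entry runs on the source host and the replay entry on the destination host; staggering the two by a few minutes avoids replaying a log that is still being copied.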
After the logs have been received, they must be replayed. Previous examples showed the use of sqlplus to manually run recover database until cancel, a step that can be easily automated. The example shown here uses the script described in Replay Logs on Standby Database. The script accepts an argument that specifies the database requiring a replay operation, which permits the same script to be used in a multidatabase migration effort.
[root@jfsc2 pancake]# ./replaylogs.pl PANCAKE
ORACLE_SID = [oracle] ? The Oracle base has been set to /orabin
SQL*Plus: Release 12.1.0.2.0 Production on Tue May 24 04:47:10 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> ORA-00279: change 560591 generated at 05/24/2016 03:33:56 needed for thread 1
ORA-00289: suggestion : /rman/pancake/logship/1_54_912576125.dbf
ORA-00280: change 560591 for thread 1 is in sequence #54
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
ORA-00279: change 562219 generated at 05/24/2016 04:15:08 needed for thread 1
ORA-00289: suggestion : /rman/pancake/logship/1_55_912576125.dbf
ORA-00280: change 562219 for thread 1 is in sequence #55
ORA-00278: log file '/rman/pancake/logship/1_54_912576125.dbf' no longer needed for this recovery
ORA-00279: change 562370 generated at 05/24/2016 04:19:18 needed for thread 1
ORA-00289: suggestion : /rman/pancake/logship/1_56_912576125.dbf
ORA-00280: change 562370 for thread 1 is in sequence #56
ORA-00278: log file '/rman/pancake/logship/1_55_912576125.dbf' no longer needed for this recovery
...
ORA-00279: change 563137 generated at 05/24/2016 04:36:20 needed for thread 1
ORA-00289: suggestion : /rman/pancake/logship/1_65_912576125.dbf
ORA-00280: change 563137 for thread 1 is in sequence #65
ORA-00278: log file '/rman/pancake/logship/1_64_912576125.dbf' no longer needed for this recovery
ORA-00308: cannot open archived log '/rman/pancake/logship/1_65_912576125.dbf'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
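The wrapper itself need not be complex. The following is a hedged sketch of what such a script might look like; the actual replaylogs.pl is described in Replay Logs on Standby Database, and the commented sqlplus invocation here is an illustrative assumption, not the script's real contents:

```shell
#!/bin/sh
# Sketch of a replay wrapper. Usage: ./replaylogs.sh ORACLE_SID
# The default SID is for illustration only.
SID=${1:-PANCAKE}

# SQL to feed to sqlplus: replay every shipped archive log, then stop
# when the next log is not yet present (the expected ORA-00308 error).
SQL='recover standby database until cancel;
AUTO'

MSG="would replay logs for $SID"
echo "$MSG"
# On a real system (assumption -- verify environment setup and paths):
# export ORACLE_SID="$SID"
# echo "$SQL" | sqlplus / as sysdba
```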
Cutover
When you are ready to cut over to the new environment, you must perform one final synchronization. When working with regular file systems, it is easy to make sure that the migrated database is 100% synchronized against the original because the original redo logs are copied and replayed. There is no good way to do this with ASM. Only the archive logs can be easily recopied. To make sure that no data is lost, the final shutdown of the original database must be performed carefully.
-
First, the database must be quiesced, ensuring that no changes are being made. This quiescing might include disabling scheduled operations, shutting down listeners, and/or shutting down applications.
-
After this step is taken, most DBAs create a dummy table to serve as a marker of the shutdown.
-
Force archiving of the current log to make sure that the creation of the dummy table is recorded within the archive logs. To do so, run the following commands:
SQL> create table cutovercheck as select * from dba_users;
Table created.
SQL> alter system archive log current;
System altered.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
-
To copy the last of the archive logs, the database must be available but not open. Bring it to the mount state:
SQL> startup mount;
ORACLE instance started.
Total System Global Area  805306368 bytes
Fixed Size                  2929552 bytes
Variable Size             331353200 bytes
Database Buffers          465567744 bytes
Redo Buffers                5455872 bytes
Database mounted.
-
To copy the archive logs, run the following commands:
RMAN> configure channel device type disk format '/rman/pancake/logship/%h_%e_%a.dbf';
2> backup as copy archivelog from time 'sysdate-2';
3>
4>
using target database control file instead of recovery catalog
old RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/rman/pancake/logship/%h_%e_%a.dbf';
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/rman/pancake/logship/%h_%e_%a.dbf';
new RMAN configuration parameters are successfully stored
Starting backup at 24-MAY-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=8 device type=DISK
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=54 RECID=123 STAMP=912659482
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 05/24/2016 04:58:24
ORA-19635: input and output file names are identical: /rman/pancake/logship/1_54_912576125.dbf
continuing other job steps, job failed will not be re-run
...
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=45 RECID=152 STAMP=912659514
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 05/24/2016 04:58:58
ORA-19635: input and output file names are identical: /rman/pancake/logship/1_45_912576125.dbf
continuing other job steps, job failed will not be re-run
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=47 RECID=153 STAMP=912659515
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 05/24/2016 04:59:00
ORA-19635: input and output file names are identical: /rman/pancake/logship/1_47_912576125.dbf
-
Finally, replay the remaining archive logs on the new server.
[root@jfsc2 pancake]# ./replaylogs.pl PANCAKE
ORACLE_SID = [oracle] ? The Oracle base has been set to /orabin
SQL*Plus: Release 12.1.0.2.0 Production on Tue May 24 05:00:53 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> ORA-00279: change 563137 generated at 05/24/2016 04:36:20 needed for thread 1
ORA-00289: suggestion : /rman/pancake/logship/1_65_912576125.dbf
ORA-00280: change 563137 for thread 1 is in sequence #65
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
ORA-00279: change 563629 generated at 05/24/2016 04:55:20 needed for thread 1
ORA-00289: suggestion : /rman/pancake/logship/1_66_912576125.dbf
ORA-00280: change 563629 for thread 1 is in sequence #66
ORA-00278: log file '/rman/pancake/logship/1_65_912576125.dbf' no longer needed for this recovery
ORA-00308: cannot open archived log '/rman/pancake/logship/1_66_912576125.dbf'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
-
At this stage, all data has been replicated. The database is ready to be converted from a standby database to an active operational database and then opened.
SQL> alter database activate standby database;
Database altered.
SQL> alter database open;
Database altered.
-
Confirm the presence of the dummy table and then drop it.
SQL> desc cutovercheck
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 USERNAME                                  NOT NULL VARCHAR2(128)
 USER_ID                                   NOT NULL NUMBER
 PASSWORD                                           VARCHAR2(4000)
 ACCOUNT_STATUS                            NOT NULL VARCHAR2(32)
 LOCK_DATE                                          DATE
 EXPIRY_DATE                                        DATE
 DEFAULT_TABLESPACE                        NOT NULL VARCHAR2(30)
 TEMPORARY_TABLESPACE                      NOT NULL VARCHAR2(30)
 CREATED                                   NOT NULL DATE
 PROFILE                                   NOT NULL VARCHAR2(128)
 INITIAL_RSRC_CONSUMER_GROUP                        VARCHAR2(128)
 EXTERNAL_NAME                                      VARCHAR2(4000)
 PASSWORD_VERSIONS                                  VARCHAR2(12)
 EDITIONS_ENABLED                                   VARCHAR2(1)
 AUTHENTICATION_TYPE                                VARCHAR2(8)
 PROXY_ONLY_CONNECT                                 VARCHAR2(1)
 COMMON                                             VARCHAR2(3)
 LAST_LOGIN                                         TIMESTAMP(9) WITH TIME ZONE
 ORACLE_MAINTAINED                                  VARCHAR2(1)
SQL> drop table cutovercheck;
Table dropped.
Nondisruptive redo log migration
There are times when a database is correctly organized overall with the exception of the redo logs. This can happen for many reasons, the most common of which is related to snapshots. Products such as SnapManager for Oracle, SnapCenter, and the NetApp Snap Creator storage management framework enable near-instantaneous recovery of a database, but only if you revert the state of the data file volumes. If redo logs share space with the data files, reversion cannot be performed safely because it would destroy the redo logs, likely resulting in data loss. Therefore, the redo logs must be relocated.
This procedure is simple and can be performed nondisruptively.
Current redo log configuration
-
Identify the number of redo log groups and their respective group numbers.
SQL> select group#||' '||member from v$logfile;
GROUP#||''||MEMBER
--------------------------------------------------------------------------------
1 /redo0/NTAP/redo01a.log
1 /redo1/NTAP/redo01b.log
2 /redo0/NTAP/redo02a.log
2 /redo1/NTAP/redo02b.log
3 /redo0/NTAP/redo03a.log
3 /redo1/NTAP/redo03b.log
6 rows selected.
-
Identify the size of the redo logs.
SQL> select group#||' '||bytes from v$log;
GROUP#||''||BYTES
--------------------------------------------------------------------------------
1 524288000
2 524288000
3 524288000
Create new logs
-
For each redo log, create a new group with a matching size and number of members.
SQL> alter database add logfile ('/newredo0/redo01a.log', '/newredo1/redo01b.log') size 500M;
Database altered.
SQL> alter database add logfile ('/newredo0/redo02a.log', '/newredo1/redo02b.log') size 500M;
Database altered.
SQL> alter database add logfile ('/newredo0/redo03a.log', '/newredo1/redo03b.log') size 500M;
Database altered.
SQL>
-
Verify the new configuration.
SQL> select group#||' '||member from v$logfile;
GROUP#||''||MEMBER
--------------------------------------------------------------------------------
1 /redo0/NTAP/redo01a.log
1 /redo1/NTAP/redo01b.log
2 /redo0/NTAP/redo02a.log
2 /redo1/NTAP/redo02b.log
3 /redo0/NTAP/redo03a.log
3 /redo1/NTAP/redo03b.log
4 /newredo0/redo01a.log
4 /newredo1/redo01b.log
5 /newredo0/redo02a.log
5 /newredo1/redo02b.log
6 /newredo0/redo03a.log
6 /newredo1/redo03b.log
12 rows selected.
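With more than a handful of log groups, the ALTER DATABASE statements above can be generated rather than typed by hand. A minimal sketch, using the group count, size, and /newredo0 and /newredo1 destinations from this example:

```shell
#!/bin/sh
# Generate one ADD LOGFILE statement per new group, mirrored across two
# destinations, sized to match the existing 500M logs. Paths and group
# count are taken from this example; substitute the real values.
SIZE=500M
STMTS=""
for N in 1 2 3; do
  STMTS="${STMTS}alter database add logfile ('/newredo0/redo0${N}a.log', '/newredo1/redo0${N}b.log') size ${SIZE};
"
done
printf '%s' "$STMTS"
# The output can be reviewed and then piped into sqlplus.
```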
Drop old logs
-
Drop the old logs (groups 1, 2, and 3).
SQL> alter database drop logfile group 1;
Database altered.
SQL> alter database drop logfile group 2;
Database altered.
SQL> alter database drop logfile group 3;
Database altered.
-
If you encounter an error that prevents you from dropping an active log, force a switch to the next log to release the lock and force a global checkpoint. The following example shows this process: the attempt to drop logfile group 2, located in the old location, was denied because there was still active data in this logfile.
SQL> alter database drop logfile group 2;
alter database drop logfile group 2
*
ERROR at line 1:
ORA-01623: log 2 is current log for instance NTAP (thread 1) - cannot drop
ORA-00312: online log 2 thread 1: '/redo0/NTAP/redo02a.log'
ORA-00312: online log 2 thread 1: '/redo1/NTAP/redo02b.log'
-
Archiving the current log followed by a checkpoint enables you to drop the logfile.
SQL> alter system archive log current;
System altered.
SQL> alter system checkpoint;
System altered.
SQL> alter database drop logfile group 2;
Database altered.
-
Then delete the logs from the file system. You should perform this process with extreme care.