Oracle® Data Guard Concepts and Administration 10g Release 1 (10.1) Part Number B10823-01
This chapter describes configuring log transport services to transmit redo from the production database to one or more archival destinations. It contains the following topics:
Log transport services control the automated transfer of redo data from a production or primary database destination to another (standby) database destination. Log transport services also manage the process of resolving any gaps in the archived redo log files due to a network failure.
Log transport services can transmit redo data to local and remote destinations. Remote destinations can include any of the following types: physical and logical standby databases, archived redo log repositories, cross-instance archival database environments, Oracle Change Data Capture staging databases, and Oracle Streams downstream capture databases.
Figure 5-1 shows a simple Data Guard configuration with log transport services archiving redo data to a local destination on the primary database while also transmitting it to archived redo log files or standby redo log files at a remote standby database destination.
This section contains the following topics:
There are several types of destinations supported by log transport services:
Standby database destinations can be either physical standby databases or logical standby databases. Section 1.1.2 discusses standby databases.
This type of destination allows off-site archiving of redo data. An archive log repository is created by using a physical standby control file, starting the instance, and mounting the database. This database contains no datafiles and cannot be used for switchover or failover. This alternative is useful as a way of holding archived redo log files for a short period of time, perhaps a day, after which the log files can then be deleted. This avoids most of the storage and processing expense of another fully-configured standby database.
A cross-instance archival database environment is possible on the primary and standby databases. Within a Real Application Clusters environment, each instance transmits its redo data to a single instance of the cluster. This instance, known as the recovery instance, is typically the instance where recovery is performed. The recovery instance typically has a tape drive available for RMAN backup and restoration support. Cross-instance archival environments are described in Appendix B. Oracle High Availability Architecture and Best Practices provides additional information about RAC and Data Guard configurations.
This destination type allows Oracle Streams to configure the capture process remotely at a downstream database. The Streams downstream capture process uses log transport services to transfer redo data to the downstream database where a Streams capture process captures changes in the archived redo log files at the remote destination. See Oracle Streams Concepts and Administration for more information.
This destination type allows Change Data Capture for the Asynchronous AutoLog to use log transport services to transfer redo data from a source database to a remote staging database where a process captures change data from the archived redo log files. See Oracle Data Warehousing Guide for more information.
For discussion purposes, this guide refers to the production database as a primary database and to archival destinations as standby databases (as defined in Section 1.1). If you are using Oracle Change Data Capture, substitute the terms source and staging database for primary and standby database, respectively. If you are using Oracle Streams, substitute the terms source and downstream capture database for primary and standby database, respectively.
The LOG_ARCHIVE_DEST_n initialization parameter defines up to ten (where n = 1, 2, 3, ... 10) destinations, each of which must specify either the LOCATION or the SERVICE attribute to specify where to archive the redo data. (Also, see Chapter 12 for complete information about all LOG_ARCHIVE_DEST_n attributes.)
The LOCATION and SERVICE attributes describe either a local disk location or an Oracle Net service name that represents a standby destination to which log transport services will transmit redo data. Specifying remote destinations with the SERVICE attribute ensures that Data Guard can maintain a transactionally consistent remote copy of the primary database for disaster recovery.
For every LOG_ARCHIVE_DEST_n initialization parameter that you define, you can specify a corresponding LOG_ARCHIVE_DEST_STATE_n parameter. The LOG_ARCHIVE_DEST_STATE_n (where n is an integer from 1 to 10) initialization parameter specifies whether the corresponding destination is currently on (enabled) or off (disabled). Table 5-1 describes the LOG_ARCHIVE_DEST_STATE_n parameter attributes.
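For example, the following statements show one way to temporarily defer a destination and later re-enable it; this is a minimal sketch, and the destination number 2 is illustrative:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;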
Example 5-1 provides an example of one destination with the LOCATION attribute.
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
Figure 5-2 shows what this simple configuration, consisting of a single local destination, would look like. The log writer process writes redo data to the online redo log file. As each online redo log file is filled, a log switch occurs and an ARCn process archives the filled online redo log file to an archived redo log file. The filled online redo log file is now available for reuse.
It is important to note that the configuration shown in Figure 5-2 does not include a standby database and thus does not provide disaster-recovery protection. To make this simple configuration into a basic Data Guard configuration that provides disaster recovery, you need to add a standby database at a remote destination by specifying the SERVICE attribute.
Example 5-2 shows the initialization parameters that enable log transport services to archive the online redo log on the local destination chicago and transmit redo data to a remote standby database with the Oracle Net service name boston. The example takes the default values for all of the other LOG_ARCHIVE_DEST_n attributes:
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_2='SERVICE=boston'
LOG_ARCHIVE_DEST_STATE_2=ENABLE
These initialization parameters set up a basic Data Guard configuration based on the premise that log transport services use archiver (ARCn) processes to archive to both the local and remote destinations, and that the configuration provides the maximum performance level of data protection.
Although you can create a basic Data Guard configuration by specifying only the LOCATION or the SERVICE attributes on the LOG_ARCHIVE_DEST_n parameter, you can optionally specify more attributes to further define each destination's behavior. The following sections describe several of the LOG_ARCHIVE_DEST_n parameter attributes.
The Oracle database enables you to configure a disk area called the flash recovery area that is a directory, file system, or Oracle Storage Manager disk group that serves as the default storage area for files related to recovery.
To configure a flash recovery area, you specify the directory, file system, or Oracle Storage Manager disk group that will serve as the flash recovery area using the DB_RECOVERY_FILE_DEST initialization parameter. If no local destinations are defined, Data Guard implicitly uses the LOG_ARCHIVE_DEST_10 destination to refer to the default disk location for the flash recovery area and for storing the archived redo log files. (See Oracle Database Backup and Recovery Basics to configure the flash recovery area and Oracle Database Administrator's Guide for more information about Oracle Storage Manager and Oracle Managed Files.)
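For example, the flash recovery area can be defined dynamically with statements along the following lines; this is a sketch, and the size and directory shown are illustrative (DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST):
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='/u01/flash_recovery_area';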
Although the flash recovery area uses the LOG_ARCHIVE_DEST_10 destination by default, you can explicitly set up flash recovery areas to use one or more other LOG_ARCHIVE_DEST_n destinations or the STANDBY_ARCHIVE_DEST destination. This section contains the following topics:
See Oracle Database Backup and Recovery Basics to configure flash recovery areas and Section 8.4.4 for information about setting up a deletion policy for archived redo log files in flash recovery areas.
If a flash recovery area has been configured and no local destinations are defined, Data Guard implicitly uses the LOG_ARCHIVE_DEST_10 destination to refer to the default disk location for the flash recovery area and for storing the archived redo log files.
When the LOG_ARCHIVE_DEST_10 destination is used for the flash recovery area, Data Guard automatically uses the default values for all of the LOG_ARCHIVE_DEST_10 parameter attributes. To override the defaults, you can dynamically set the values for most of the attributes by explicitly specifying the LOG_ARCHIVE_DEST_10 parameter. For example, the following ALTER SYSTEM SET statement specifies several attributes on the LOG_ARCHIVE_DEST_10 initialization parameter:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_10='LOCATION=USE_DB_RECOVERY_FILE_DEST LGWR MANDATORY REOPEN=5 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
When setting LOG_ARCHIVE_DEST_n attributes, the TEMPLATE attribute of a LOG_ARCHIVE_DEST_n parameter overrides all other specifications for the flash recovery area. If the TEMPLATE attribute is specified for a remote destination and that destination archives redo data to a flash recovery area, the archived redo log file will use the directory and file name specified by the TEMPLATE attribute.
By default, if no local destinations are defined, flash recovery areas use the LOG_ARCHIVE_DEST_10 destination, but you can explicitly set up one or more other LOG_ARCHIVE_DEST_n destinations. For example, you can optionally:
Configure a destination other than LOG_ARCHIVE_DEST_10 for the flash recovery area
For example, an existing Data Guard configuration may have already used the LOG_ARCHIVE_DEST_10 destination for another purpose, or you may want to release the LOG_ARCHIVE_DEST_10 destination for other uses.
To configure the flash recovery area to use another archival destination, you must specify the LOCATION=USE_DB_RECOVERY_FILE_DEST attribute to define the new destination. For example:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST ARCH MANDATORY REOPEN=5 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
The implicit setting (for LOG_ARCHIVE_DEST_10 to use the flash recovery area) will be cleared.
Configure a LOG_ARCHIVE_DEST_n destination in addition to the LOG_ARCHIVE_DEST_10 destination for use after a role transition
For example, you can configure one destination to be valid for standby redo log archival when the database operates in the standby role and another destination to be valid for online redo log archival when the database operates in the primary role.
To configure a LOG_ARCHIVE_DEST_n destination in addition to LOG_ARCHIVE_DEST_10, you must explicitly specify both destinations:
LOG_ARCHIVE_DEST_9='LOCATION=USE_DB_RECOVERY_FILE_DEST ARCH MANDATORY REOPEN=5 VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)'
LOG_ARCHIVE_DEST_10='LOCATION=USE_DB_RECOVERY_FILE_DEST ARCH MANDATORY REOPEN=5 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
You can use a flash recovery area on a physical standby database by defining the STANDBY_ARCHIVE_DEST parameter. For example:
STANDBY_ARCHIVE_DEST='LOCATION=USE_DB_RECOVERY_FILE_DEST'
Note: Flash recovery area destinations specified with the
You can share a flash recovery area between databases provided each database that shares the flash recovery area has a unique database name, specified with the DB_UNIQUE_NAME initialization parameter.
The following examples show how to specify initialization parameters on the primary and standby databases that will share a flash recovery area in the /arch/oradata location. Although the DB_UNIQUE_NAME parameter is not specified in Example 5-3, it defaults to PAYROLL, which is the name specified for the DB_NAME initialization parameter.
Example 5-3 (primary database):
DB_NAME=PAYROLL
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
DB_RECOVERY_FILE_DEST='/arch/oradata'
DB_RECOVERY_FILE_DEST_SIZE=20G
Example 5-4 (standby database):
DB_NAME=PAYROLL
DB_UNIQUE_NAME=boston
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
STANDBY_ARCHIVE_DEST='LOCATION=USE_DB_RECOVERY_FILE_DEST'
DB_RECOVERY_FILE_DEST='/arch/oradata'
DB_RECOVERY_FILE_DEST_SIZE=5G
See Oracle Database Backup and Recovery Advanced User's Guide for more information about sharing a flash recovery area among multiple databases.
This section contains the following topics:
By default, log transport services use ARCn processes to archive the local online redo log files on the primary database before transmitting the redo data to remote standby destinations. Using ARCn processes for archival processing is described in the following topics:
ARCn archival processing supports only the maximum performance level of data protection in Data Guard configurations. You must use the LGWR process to transmit redo data to standby locations that operate in other data protection modes. See Section 5.6 for more information about the Data Guard data protection modes.
The LOG_ARCHIVE_LOCAL_FIRST initialization parameter, the ARCH attribute on the LOG_ARCHIVE_DEST_n parameter, and the LOG_ARCHIVE_DEST_STATE_n parameter control ARCn archival processing. The following sections describe setting these parameters to control archival processing.
The ARCH attribute of the LOG_ARCHIVE_DEST_n parameter enables log transport services to use ARCn processes to transmit redo data to archival destinations:
If you specify the ARCH and LOCATION attributes on the LOG_ARCHIVE_DEST_n parameter, ARCn processes archive to a local destination.
If you specify the ARCH and SERVICE attributes on the LOG_ARCHIVE_DEST_n parameter, ARCn processes transmit redo data to a remote destination.
The LOG_ARCHIVE_LOCAL_FIRST initialization parameter controls when the archiver processes (ARCn) transmit redo data to remote standby database destinations. The following table describes possible values for this parameter.
Value | Transmits Redo Data to the Remote Standby Destination . . .
------|--------------------------------------------------------------
TRUE  | After the online redo log file is completely and successfully archived to at least one local destination. This is the default value. Section 5.3.1.2 provides more information about this default ARCn behavior.
FALSE | At the same time the online redo log file is archived to the local destinations. Section 5.3.1.3 provides more information about this ARCn behavior.
The following sections provide more information about the behavior of ARCn processing depending on the value of the LOG_ARCHIVE_LOCAL_FIRST initialization parameter.
Figure 5-3 shows the default ARCn archival processing in a Data Guard configuration with a primary database located in Chicago and one physical standby database located in Boston. (This is the configuration that was created in Chapter 3.)
Archiving happens when a log switch occurs on the primary database. After the ARC0 process successfully archives the local online redo log to the local destination (LOG_ARCHIVE_DEST_1), the ARC1 process transmits redo from the local archived redo log files (instead of the online redo log files) to the remote standby destination (LOG_ARCHIVE_DEST_2). On the remote destination, the remote file server process (RFS) will, in turn, write the redo data to an archived redo log file from a standby redo log file. (Section 5.6.2 describes how to configure standby redo log files.) Log apply services use Redo Apply (MRP process) or SQL Apply (LSP process) to apply the redo to the standby database.
Because the online redo log files are archived locally first, the LGWR process reuses the online redo log files much earlier than would be possible if the ARCn processes archived to the standby database concurrently with the local destination. This behavior is useful when archiving to remote destinations that use a slow network connection, such as a long-distance wide area network (WAN). A benefit of the default ARCn archival behavior is that local archiving, and hence processing on the primary database, is not affected by archiving to non-mandatory, remote destinations. It may be necessary to create more online redo log files, because it may take more time to recycle the online redo log files for reuse by the log writer process.
As shown in Figure 5-3, you need to have at least 2 ARCn processes to separate local archival from remote archival. This can be done by setting the LOG_ARCHIVE_MAX_PROCESSES initialization parameter (the default setting is 2).
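For example, the number of archiver processes can be raised dynamically; this is a minimal sketch, and the value 4 is illustrative:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=4;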
Because the default ARCn archival processing disassociates local archiving from remote archiving, sites that may have policies to delete archived redo log files on the primary database immediately after backing them up must make sure that the standby destinations receive the corresponding redo data before deleting the archived redo log files on the primary database. You can query the V$ARCHIVED_LOG view to verify the redo data was received on standby destinations.
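For example, a query along these lines on the primary database lists the log sequences recorded for a given destination; this is a sketch that assumes the remote standby is defined as LOG_ARCHIVE_DEST_2:
SQL> SELECT THREAD#, SEQUENCE#, ARCHIVED FROM V$ARCHIVED_LOG
  2> WHERE DEST_ID=2 ORDER BY THREAD#, SEQUENCE#;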
To transmit redo data to the standby destination at the same time the online redo log file is being archived to the local destination, set the LOG_ARCHIVE_LOCAL_FIRST=FALSE initialization parameter.
Note: Prior to release 10.1, the default ARCn archival behavior was to transmit redo data to the standby destination at the same time the online redo log file was being archived.
Example 5-5 shows the portion of the primary role initialization parameters with LOG_ARCHIVE_LOCAL_FIRST=FALSE. Note that specifying the ARCH attribute on the LOG_ARCHIVE_DEST_n parameter is optional, because this is the default archival setting.
LOG_ARCHIVE_LOCAL_FIRST=FALSE
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/'
LOG_ARCHIVE_DEST_2='SERVICE=boston ARCH'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
Figure 5-4 shows archival processing in a Data Guard configuration in which ARCn processes on the primary database transmit redo data to the remote destination at the same time the local online redo log file is being archived. In this configuration, archival operations occur on both the local and the remote standby destinations using redo data from the local online redo log files. This results in redo data being promptly dispatched to the remote standby database destination.
Specifying LOG_ARCHIVE_LOCAL_FIRST=FALSE is most useful for faster network connections, such as high-speed local area networks (LANs).
If you choose the LGWR process, it will transmit redo data to both the local and remote destinations as the redo is generated on the primary database. This section contains the following topics:
Specifying the LGWR and SYNC attributes and configuring standby redo log files on at least one destination in a Data Guard configuration are required prerequisites for the maximum protection and maximum availability data protection modes. See Section 5.6 for information about the Data Guard data protection modes.
You can optionally enable log transport services to use the LGWR process to concurrently transmit redo data to remote destinations at the same time the redo is written to the local online redo log files.
Using the LGWR process differs from the default ARCn processing (described in Section 5.3.1), because instead of waiting for the online redo log to switch at the primary database and then writing the entire archived redo log at the remote destination all at once, the LGWR process creates a new redo log file at the standby site that reflects the log sequence number (and size) of the current online redo log of the primary database. Then, as redo is generated at the primary database, it is also propagated to the remote destination. The propagation to the remote destination will be either synchronous or asynchronous, based on whether the SYNC or the ASYNC attribute is set on the LOG_ARCHIVE_DEST_n parameter. Synchronous LGWR processing is required for the maximum protection and maximum availability modes of data protection in Data Guard configurations.
The following sections describe the LGWR, SYNC, and ASYNC attributes.
The LGWR attribute of the LOG_ARCHIVE_DEST_n parameter enables log transport services to use the LGWR process to transmit redo data to archival destinations. You can specify the LGWR and SERVICE attributes on the LOG_ARCHIVE_DEST_n parameter to transmit redo data to a remote standby destination.
By default, the LGWR process synchronously archives to the local online redo log files at the same time it transmits redo data to the remote destination. This is equivalent to specifying the LGWR and SYNC attributes on the LOG_ARCHIVE_DEST_n parameter.
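As a minimal sketch (the boston service name is illustrative), such a destination might be defined as follows:
LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR SYNC'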
If you need to transmit redo data to multiple remote destinations, you can optionally specify SYNC=PARALLEL to initiate the network I/O to multiple destinations in parallel. When you specify both the LGWR and SYNC=PARALLEL attributes on the LOG_ARCHIVE_DEST_n parameter, the LGWR process submits the redo data to one or more network server (LNSn) processes, which then initiate the network I/O in parallel.
If you do not specify either the SYNC or ASYNC attributes, the default is SYNC=PARALLEL.
The ASYNC attribute performs all network I/O asynchronously, and control is returned to the executing application or user immediately. When this attribute is specified, the LGWR process archives to the local online redo log file and submits the network I/O request to the network server (LNSn) process for that destination, and the LGWR process continues processing the next request without waiting for the network I/O to complete.
If you specify the ASYNC attribute, you can specify a block count to determine the size of the SGA network buffer to be used. Block counts from 0 to 102,400 blocks are allowed. The ASYNC attribute allows the optional suffix value K to represent 1,000 (the value 1K indicates 1,000 512-byte blocks). In general, for slower network connections, use larger block counts. Section 5.3.2.3 shows an example of asynchronous network transmission in a Data Guard configuration.
When the LGWR and ASYNC attributes are in effect, the LGWR process archives to the local online redo log file and submits the redo data to one or more LNSn processes that asynchronously transmit the redo data over the network. If log transport services transmit redo data to multiple remote destinations, the LNSn processes (one for each destination) initiate the network I/O to all of the destinations in parallel. See Chapter 12 for more information.
Note: If you configure a destination to use the LGWR process, but for some reason the LGWR process becomes unable to archive to the destination, then log transport services will revert to using the ARCn process to complete archival operations using the default (ARCn) behavior.
Example 5-6 shows the primary role LOG_ARCHIVE_DEST_n parameters that configure the LGWR process for synchronous network transmission. Note that specifying the SYNC attribute on the LOG_ARCHIVE_DEST_n parameter is optional, because synchronous network transmission is the default for LGWR archival processing. Also, the example specifies the NET_TIMEOUT=30 attribute to control the amount of time that the LGWR process waits for status from the network server process before terminating the network connection. If there is no reply within 30 seconds, then the LGWR process returns an error message.
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/'
LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR SYNC NET_TIMEOUT=30'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
Figure 5-5 shows a Data Guard configuration that uses the LGWR process to synchronously transmit redo data to the standby system at the same time it is writing redo data to the online redo log file on the primary database. On the standby system, the remote file server (RFS) receives redo data over the network from the LGWR process and writes the redo data to the standby redo log files.
A log switch on the primary database triggers a log switch on the standby database, causing ARCn processes on the standby database to archive the standby redo log files to archived redo log files on the standby database. Then, log apply services use Redo Apply (MRP process) or SQL Apply (LSP process) to apply the redo data to the standby database.
If real-time apply is enabled, Data Guard recovers redo data directly from the current standby redo log file as it is being filled up by the RFS process.
Example 5-7 shows the primary role LOG_ARCHIVE_DEST_n parameters that configure the LGWR process for asynchronous network transmission.
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/'
LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR ASYNC=61440'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
Figure 5-6 shows the LNSn process transmitting redo data over Oracle Net to the RFS process on the standby database. The LNSn and LGWR processes on the primary database use interprocess communication (IPC) to communicate.
Providing a secure environment should be a core requirement for any site supporting mission-critical applications, because a lack of security can directly affect availability. Data Guard provides a secure environment and prevents the possible tampering of redo data as it is being transferred to the standby database.
Log transport services use authenticated network sessions to transfer redo data. These sessions are authenticated using the SYS user password contained in the password file. All databases in the Data Guard configuration must use a password file, and the SYS password contained in this password file must be identical on all systems. This authentication can be performed even if Oracle Advanced Security is not installed, and provides some level of security when shipping redo.
Note: To further protect redo (for example, to encrypt redo or compute an integrity checksum value for redo traffic over the network to disallow redo tampering on the network), Oracle recommends that you install and use Oracle Advanced Security. See the Oracle Advanced Security Administrator's Guide.
To provide for secure redo transmission, you need to set up every database in the Data Guard configuration to use a password file, and set the password for the SYS user identically on every system. To set up a secure environment perform the following steps on the primary database and each standby database:
Create a password file (using the orapwd utility) on the primary and all standby databases. For example:
ORAPWD FILE=orapw PASSWORD=mypassword ENTRIES=10
This example creates a password file with 10 entries, where the password for SYS is mypassword. For redo data transmission to succeed, ensure you set the password for the SYS user account identically for every primary and standby database.
Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE or SHARED to enable Oracle to check for a password file and to specify how many databases can use the password file. For example:
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
See the Oracle Database Reference for more information about this parameter.
Once you have performed these steps to set up security on every database in the Data Guard configuration, Data Guard transmits redo data only after the appropriate authentication checks using SYS credentials are successful.
This section contains the following topics:
The VALID_FOR attribute enables you to configure destination attributes for both the primary and standby database roles in one server parameter file (SPFILE), so that your Data Guard configuration operates properly after a role transition. This simplifies switchovers and failovers by removing the need to enable and disable the role-specific parameter files after a role transition.
When you specify the VALID_FOR attribute of the LOG_ARCHIVE_DEST_n parameter, it identifies when log transport services can transmit redo data to a destination based on two factors: the type of redo log file being archived (online or standby) and the current role of the database (primary or standby).
To configure these factors for each LOG_ARCHIVE_DEST_n destination, you specify this attribute with a pair of keywords: VALID_FOR=(redo_log_type,database_role). The redo_log_type keyword identifies the destination as valid for archiving one of the following: ONLINE_LOGFILE, STANDBY_LOGFILE, or ALL_LOGFILES. The database_role keyword identifies the role in which the current database must be running for the destination to be valid: PRIMARY_ROLE, STANDBY_ROLE, or ALL_ROLES.
If you do not specify the VALID_FOR attribute for a destination, by default, archiving the online redo log and standby redo log is enabled to the destination, regardless of the database role. This default behavior is equivalent to setting the (ALL_LOGFILES,ALL_ROLES) keyword pair on the VALID_FOR attribute. For example:
LOG_ARCHIVE_DEST_1='LOCATION=/ARCH1/CHICAGO/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
Although the (ALL_LOGFILES,ALL_ROLES) keyword pair is the default, it is not recommended for every destination. For example, logical standby databases, unlike physical standby databases, are open databases that generate redo data and have multiple log files (online redo log files, archived redo log files, and standby redo log files). In most cases, the online redo log files generated by the logical standby database are located in the same directory as the standby redo log files that are receiving redo from the primary database.
Therefore, it is recommended that you define a VALID_FOR attribute for each destination so that your Data Guard configuration operates properly, including after a role transition. See the scenarios in Section 10.1 for examples of the VALID_FOR attribute settings for various Data Guard configurations, and Chapter 12 for reference information about the VALID_FOR attribute.
If you choose not to use the VALID_FOR attribute to configure destinations, you must maintain two database server parameter files (SPFILEs) for each database: one for when the database is in the primary role and the other for the standby role. See Chapter 10 for more configuration examples.
The DB_UNIQUE_NAME attribute enables you to specify unique database names when you configure destinations. This makes it possible to dynamically add a standby database to a Data Guard configuration that contains a Real Application Clusters primary database, when that primary database is operating in either the maximum protection or the maximum availability level of protection.
Together, the DB_UNIQUE_NAME attribute of the LOG_ARCHIVE_DEST_n parameter and the DG_CONFIG attribute of the LOG_ARCHIVE_CONFIG parameter specify the unique name of each database of the Data Guard configuration. The names you supply must match what was defined for each database with the DB_UNIQUE_NAME initialization parameter.
For example, the following initialization parameters show the DB_UNIQUE_NAME and LOG_ARCHIVE_CONFIG definitions for the primary database (chicago) in the Data Guard configuration described in Chapter 3:
DB_NAME=chicago
DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago, boston)'
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
LOG_ARCHIVE_DEST_2='SERVICE=boston VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'
The DB_UNIQUE_NAME attribute is required for remote destinations specified with the SERVICE attribute. In the example, the LOG_ARCHIVE_DEST_2 parameter specifies DB_UNIQUE_NAME=boston for the remote destination; log transport services validate this information at the remote destination. If the names do not match, the connection to that destination is refused.
The LOG_ARCHIVE_CONFIG parameter also has SEND, NOSEND, RECEIVE, and NORECEIVE attributes:
SEND enables a database to send redo data to remote destinations.
RECEIVE enables the standby database to receive redo from another database.
To disable these settings, use the NOSEND and NORECEIVE keywords.
For example, to ensure the primary database never accidentally receives any archived redo data, set the LOG_ARCHIVE_CONFIG initialization parameter to NORECEIVE on the primary database, as follows:
LOG_ARCHIVE_CONFIG='NORECEIVE,DG_CONFIG=(chicago,boston)'
However, keep in mind that specifying either the NOSEND or the NORECEIVE attribute may limit the database instance's capabilities after a role transition. For example, if a standby database with the NOSEND attribute set is transitioned to the primary role, it would not be able to transmit redo data to other standby databases until you reset the parameter value to SEND. Similarly, a database that has the NORECEIVE attribute specified cannot receive redo from the primary database.
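For example, after such a role transition you might re-enable redo transmission on the new primary database with a statement along these lines; this is a sketch, and the DG_CONFIG list shown is illustrative:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='SEND,DG_CONFIG=(chicago,boston)';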
By default, the LOG_ARCHIVE_CONFIG parameter allows the primary database to send redo data to the standby database and allows the standby database to receive redo from the primary database for archiving. This is equivalent to setting both the SEND and RECEIVE attributes on the LOG_ARCHIVE_CONFIG parameter.
To handle archiving failures, you can use the REOPEN and MAX_FAILURE attributes of the LOG_ARCHIVE_DEST_n parameter to specify what actions are to be taken when archival processing to a destination fails. These actions include retrying the archival operation after a specified interval and limiting the number of consecutive retry attempts.
Use the REOPEN attribute to determine if and when the ARCn process or the LGWR process attempts to transmit redo data again to a failed destination following an error.
Use the REOPEN=seconds attribute to specify the minimum number of seconds that must elapse following an error before the archiving process will try again to access a failed destination. The default value is 300 seconds. The value set for the REOPEN attribute applies to all errors, not just connection failures. You can turn off the option by specifying NOREOPEN, which prevents the destination from being retried after a failure occurs.
Use the MAX_FAILURE attribute to specify the maximum number of consecutive times that log transport services attempt to transmit redo data to a failed destination. You can use the REOPEN attribute, in conjunction with the MAX_FAILURE attribute, to limit the number of consecutive attempts that will be made to reestablish communication with a failed destination. Once the specified number of consecutive attempts is exceeded, the destination is treated as if the NOREOPEN attribute was specified.
The REOPEN attribute is required when you use the MAX_FAILURE attribute. Example 5-8 shows how to set a retry time of 60 seconds and limit retries to 3 attempts.
LOG_ARCHIVE_DEST_1='LOCATION=/arc_dest REOPEN=60 MAX_FAILURE=3'
Data Guard provides three modes of data protection: maximum protection, maximum availability, and maximum performance. The level of data protection you choose controls what happens if the primary database loses its connection to the standby database. This section contains the following topics:
To determine the appropriate data protection mode to use, review the following descriptions of the data protection modes to help assess your business requirements for data availability against user demands for response time and performance. Also, see Section 5.6.3 for information about setting up the data protection mode.
This protection mode guarantees that no data loss will occur if the primary database fails. To provide this level of protection, the redo data needed to recover each transaction must be written to both the local online redo log and to the standby redo log on at least one standby database before the transaction commits. To ensure data loss cannot occur, the primary database shuts down if a fault prevents it from writing its redo stream to at least one remote standby redo log. For multiple-instance RAC databases, Data Guard shuts down the primary database if it is unable to write the redo records to at least one properly configured database instance. The maximum protection mode requires that you:
Configure standby redo log files on at least one standby database, and set the SYNC, LGWR, and AFFIRM attributes of the LOG_ARCHIVE_DEST_n parameter for at least one standby database destination.
n parameter for at least 1 standby database destination.This protection mode provides the highest level of data protection that is possible without compromising the availability of the primary database. Like maximum protection mode, a transaction will not commit until the redo needed to recover that transaction is written to the local online redo log and to at least one remote standby redo log. Unlike maximum protection mode, the primary database does not shut down if a fault prevents it from writing its redo stream to a remote standby redo log. Instead, the primary database operates in maximum performance mode until the fault is corrected and all gaps in redo log files are resolved. When all gaps are resolved, the primary database automatically resumes operating in maximum availability mode.
This mode guarantees that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.
Like maximum protection mode, the maximum availability mode requires that you:
Configure standby redo log files on at least one standby database, and set the SYNC, LGWR, and AFFIRM attributes of the LOG_ARCHIVE_DEST_n parameter for at least one standby database destination.
n parameter for at least 1 standby database.This protection mode (the default) provides the highest level of data protection that is possible without affecting the performance of the primary database. This is accomplished by allowing a transaction to commit as soon as the redo data needed to recover that transaction is written to the local online redo log. The primary database's redo data stream is also written to at least one standby database, but that redo stream is written asynchronously with respect to the commitment of the transactions that create the redo data.
When network links with sufficient bandwidth are used, this mode provides a level of data protection that approaches that of maximum availability mode with minimal impact on primary database performance.
The maximum performance mode enables you to either set the LGWR and ASYNC attributes, or set the ARCH attribute on the LOG_ARCHIVE_DEST_n parameter for the standby database destination. If the primary database fails, you can reduce the amount of data that is not received on the standby destination by setting the LGWR and ASYNC attributes.
Standby redo log files are required for the maximum protection and maximum availability modes and highly recommended on all standby databases, because Data Guard can recover and apply more redo data from standby redo log files than from the archived redo log files alone.
You should plan the standby redo log configuration and create all required groups and members of groups either before or soon after you create the standby database. For increased availability, consider multiplexing the standby redo log files, similar to the way that online redo log files are multiplexed.
Use the following steps to configure multiplexed standby redo log files:
The size of the current standby redo log file must exactly match (or be larger than) the size of the current primary database online redo log file. For example, if the primary database uses two online redo log groups whose log files are 200K, then the standby redo log groups should also have log file sizes of 200K.
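For example, you can check the current online redo log file sizes and thread assignments on the primary database with a query like the following (a sketch; the column selection is illustrative):
SQL> SELECT GROUP#, THREAD#, BYTES FROM V$LOG;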
Minimally, the configuration should have one more standby redo log file group than the number of online redo log file groups on the primary database. However, the recommended number of standby redo log file groups is dependent on the number of threads on the primary database. Use the following equation to determine an appropriate number of standby redo log file groups:
(maximum number of logfiles for each thread + 1) * maximum number of threads
Using this equation reduces the likelihood that the primary instance's log writer (LGWR) process will be blocked because a standby redo log file cannot be allocated on the standby database. For example, if the primary database has 2 log files for each thread and 2 threads, then 6 standby redo log file groups are needed on the standby database.
Note: Logical standby databases may require more standby redo log files (or additional ARCn processes) depending on the workload. This is because logical standby databases also write to online redo log files, which take precedence over standby redo log files. Thus, the standby redo log files may not be archived as quickly as the online redo log files. Also, see Section 5.7.3.1.
Verify that the values already set for the MAXLOGFILES and MAXLOGMEMBERS clauses on the SQL CREATE DATABASE statement will not limit the number of standby redo log file groups and the number of members in each group that you can add. The only way to override the limits specified by the MAXLOGFILES and MAXLOGMEMBERS clauses is to re-create the primary database or control file.
See Oracle Database SQL Reference and your operating system specific Oracle documentation for the default and legal values of the MAXLOGFILES and MAXLOGMEMBERS clauses.
To create new standby redo log file groups and members, you must have the ALTER DATABASE system privilege. The standby database begins using the newly created standby redo log files the next time there is a log switch on the primary database. To create a new group of standby redo log files, use the ALTER DATABASE statement with the ADD STANDBY LOGFILE GROUP clause, as shown in Examples 5-9 and 5-10.
The following statement adds a new group of standby redo log files to a standby database and assigns them to THREAD 5:
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 5 2> ('/oracle/dbs/log1c.rdo','/oracle/dbs/log2c.rdo') SIZE 500M;
The THREAD clause is required only if you want to add one or more standby redo log file groups to a specific primary database thread. If you do not include the THREAD clause and the configuration uses Real Application Clusters (RAC), Data Guard will automatically assign standby redo log file groups to threads at runtime as they are needed by the various RAC instances.
You can also specify a number that identifies the group using the GROUP clause:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 2> ('/oracle/dbs/log1c.rdo','/oracle/dbs/log2c.rdo') SIZE 500M;
Using group numbers can make administering standby redo log file groups easier. However, the group number must be between 1 and the value of the MAXLOGFILES clause. Do not skip log file group numbers (that is, do not number groups 10, 20, 30, and so on), or you will use additional space in the standby database control file.
To verify the standby redo log file groups are created and running correctly, invoke a log switch on the primary database, and then query either the V$STANDBY_LOG view or the V$LOGFILE view on the standby database. For example:
SQL> SELECT GROUP#,THREAD#,SEQUENCE#,ARCHIVED,STATUS FROM V$STANDBY_LOG;

    GROUP#    THREAD#  SEQUENCE# ARC STATUS
---------- ---------- ---------- --- ----------
         3          1         16 NO  ACTIVE
         4          0          0 YES UNASSIGNED
         5          0          0 YES UNASSIGNED
To set up log transport services and specify a level of data protection for the Data Guard configuration, perform the following steps.
On the primary database, configure the LOG_ARCHIVE_DEST_n parameter attributes appropriately. Each of the Data Guard data protection modes requires that at least one standby database in the configuration meet the minimum set of requirements listed in Table 5-2.
Note: Oracle recommends that a Data Guard configuration that is running in maximum protection mode contains at least two standby databases that meet the requirements listed in Table 5-2. That way, the primary database can continue processing if one of the standby databases cannot receive redo data from the primary database.
The following example shows how to configure the maximum availability mode:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=chicago 2> OPTIONAL LGWR SYNC AFFIRM 3> VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) 4> DB_UNIQUE_NAME=chicago';
If they are not already specified in the SPFILE, you should also specify unique names with the DB_UNIQUE_NAME initialization parameter and list all databases on the LOG_ARCHIVE_CONFIG parameter with the DG_CONFIG attribute. For example:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)';

This will enable the dynamic addition of a standby database to a Data Guard configuration that has a Real Application Clusters primary database running in either maximum protection or maximum availability mode.
Perform this step only if you are upgrading the protection mode (for example, from maximum performance mode to maximum availability mode). Otherwise, go to Step 3.
Assume this example is upgrading the Data Guard configuration from the maximum performance mode to the maximum availability mode. Shut down the primary database and restart it in mounted mode:
SQL> SHUTDOWN IMMEDIATE; SQL> STARTUP MOUNT;
For a Real Application Clusters database, shut down all of the primary instances but start and mount only one primary instance.
To specify a data protection mode, issue the SQL ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE {PROTECTION | AVAILABILITY | PERFORMANCE} statement on the primary database. For example, the following statement specifies the maximum availability mode:
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
If you performed Step 2 to upgrade the protection mode, open the database:
SQL> ALTER DATABASE OPEN;
If you are downgrading the protection mode, the database will already be open.
On the standby databases, configure the LOG_ARCHIVE_DEST_n parameter attributes so the configuration can continue to operate in the new protection mode after a switchover. For example:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=boston 2> OPTIONAL LGWR SYNC AFFIRM 3> VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) 4> DB_UNIQUE_NAME=boston';
Query the V$DATABASE view to confirm the Data Guard configuration is operating in the new protection mode. For example:
SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL FROM V$DATABASE;

PROTECTION_MODE       PROTECTION_LEVEL
--------------------- ---------------------
MAXIMUM AVAILABILITY  MAXIMUM AVAILABILITY
See Chapter 13 and Oracle Database SQL Reference for information about SQL statements.
This section contains the following topics:
Typically, when redo data is received from the primary database, the redo data is written to archived redo log files that are stored in the directory you specify with the LOCATION attribute of the LOG_ARCHIVE_DEST_n parameter. Alternatively, you can specify the STANDBY_ARCHIVE_DEST initialization parameter on the standby database to indicate an alternate directory where the archived redo log files are to be stored when received from the primary database.
If both parameters are specified, the STANDBY_ARCHIVE_DEST initialization parameter overrides the directory location specified with the LOG_ARCHIVE_DEST_n parameter.
The location where archived redo log files are stored on the standby database is determined according to the following list of rules. When the database instance is started, the rules are evaluated in the order listed:
1. If the STANDBY_ARCHIVE_DEST initialization parameter is specified on the standby database, that location is used.
2. If a LOG_ARCHIVE_DEST_n parameter contains the VALID_FOR=(STANDBY_LOGFILE,*) attribute, then the location specified for this destination is used.
3. If the COMPATIBLE parameter is set to 10.0 or greater and none of the LOG_ARCHIVE_DEST_n parameters contain the VALID_FOR=(STANDBY_LOGFILE,*) attribute, then an arbitrary LOG_ARCHIVE_DEST_n parameter that is valid for the destination is used.
4. Otherwise, archived redo log files are stored in the location indicated by the implicit default value of the STANDBY_ARCHIVE_DEST initialization parameter.
To see the implicit default value of the STANDBY_ARCHIVE_DEST initialization parameter, query the V$ARCHIVE_DEST view:
SQL> SELECT DEST_NAME, DESTINATION FROM V$ARCHIVE_DEST
  2> WHERE DEST_NAME='STANDBY_ARCHIVE_DEST';

DEST_NAME
------------------------------------------------------------------------------
DESTINATION
------------------------------------------------------------------------------
STANDBY_ARCHIVE_DEST
/oracle/dbs/arch
Log transport services use the value specified with the STANDBY_ARCHIVE_DEST initialization parameter in conjunction with the LOG_ARCHIVE_FORMAT parameter to generate the filenames for the archived redo log files or standby redo log files on the standby site. For example:
STANDBY_ARCHIVE_DEST='/arc_dest/arls'
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
In the example, %s corresponds to the sequence number, and %r corresponds to the resetlogs ID. Together, these ensure unique names are constructed for the archived redo log files across multiple incarnations of the database. The %t, which is required for Real Application Clusters configurations, corresponds to the thread number.
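With these settings, a generated filename might look like /arc_dest/arls/log1_101_684484077.arc; this is an illustrative example only, where 1 is the thread number, 101 is the sequence number, and 684484077 is the resetlogs ID.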
For a physical standby database, log transport services store the fully qualified filenames in the standby database control file, and log apply services use this information to perform recovery on the standby database.
Note: If you have specified the
To display the list of archived redo log files that are on the standby system, query the V$ARCHIVED_LOG view on the standby database:
SQL> SELECT NAME FROM V$ARCHIVED_LOG;

NAME
--------------------------------------------------------------------------------
/arc_dest/log_1_771.arc
/arc_dest/log_1_772.arc
/arc_dest/log_1_773.arc
/arc_dest/log_1_774.arc
/arc_dest/log_1_775.arc
You can specify a policy for reusing the online redo log file by setting the OPTIONAL or MANDATORY attribute of the LOG_ARCHIVE_DEST_n parameter. By default, remote destinations are set to OPTIONAL. The archival operation of an optional destination can fail, and the online redo log file can be reused even though transmitting the redo data and writing the log contents was not successful. If the archival operation of a mandatory destination fails, online redo log files cannot be overwritten until the failed archive is completed to the mandatory destination.
By default, one local destination is mandatory even if you designate all destinations to be optional.
Example 5-11 shows how to set a mandatory local archiving destination and enable that destination. When specifying the MANDATORY attribute, also consider specifying the REOPEN and MAX_FAILURE attributes as described in Section 5.5 to handle failure conditions.
LOG_ARCHIVE_DEST_3 = 'LOCATION=/arc_dest MANDATORY'
LOG_ARCHIVE_DEST_STATE_3 = ENABLE
This section contains the following topics:
The easiest way to verify the standby redo log has an appropriate number of log file groups is to examine the RFS process trace file and database alert log. If either log contains messages that indicate the RFS process frequently has to wait for a group because archiving did not complete, then add more log file groups to the standby redo log. The additional standby redo log file groups give the archival operation time to complete before the standby redo log file is reused by the RFS process.
In some cases, it might not be necessary to create a complete group of standby redo log files. A group could already exist, but may not be complete because one or more members were dropped (for example, because of disk failure). In this case, you can add new members to an existing group.
To add new members to a standby redo log file group, use the ALTER DATABASE statement with the ADD STANDBY LOGFILE MEMBER clause. The following statement adds a new member to standby redo log file group number 2:
SQL> ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/disk1/oracle/dbs/log2b.rdo' 2> TO GROUP 2;
Use fully qualified filenames of new members to indicate where the file should be created. Otherwise, files will be created in either the default or current directory of the database, depending on your operating system.
If you used the THREAD clause to pre-assign a standby redo log group to a specific thread and later need to reassign the thread, first drop the standby redo log group (using the DROP LOGFILE clause) and add it again using the ALTER DATABASE ADD STANDBY LOGFILE THREAD n statement.
This section describes:
As archived redo log files are generated and RMAN backups are made, Oracle adds new records to the reusable section of the control file. If no records are available for reuse (because all records are still within the number of days specified by CONTROL_FILE_RECORD_KEEP_TIME), then the control file is expanded and new records are added to the control file.
The maximum control file size is 20000 database blocks. If DB_BLOCK_SIZE equals 8192, then the maximum control file size is 156 MB. If the control files are stored in pre-created volumes, then the volumes that contain the primary and standby control files should be sized to accommodate a control file of maximum size. If the control file volume is too small and cannot be extended, then existing records in the control file will be overwritten before their intended reuse. This behavior is indicated by the following message in the alert log:
krcpwnc: following controlfile record written over:
The CONTROL_FILE_RECORD_KEEP_TIME initialization parameter specifies the minimum number of days that must pass before a reusable record in the control file can be reused. Setting this parameter appropriately prevents log transport services from overwriting a reusable record in the control file and ensures redo information remains available on the standby database:
Set CONTROL_FILE_RECORD_KEEP_TIME to a value that allows all on-disk backup information to be retained in the control file. CONTROL_FILE_RECORD_KEEP_TIME specifies the number of days that records are kept within the control file before becoming a candidate for reuse.
Set CONTROL_FILE_RECORD_KEEP_TIME to a value slightly longer than the oldest backup file that you intend to keep on disk, as determined by the size of the backup area.
For example, if the backup area is sized to maintain two full backups that are taken every 7 days, as well as daily incremental backups and archived redo log files, then set CONTROL_FILE_RECORD_KEEP_TIME to a value of 21 or 30. Records older than this will be reused. However, the backup metadata will still be available in the RMAN recovery catalog.
Make sure you specify a large enough value if an apply delay is also set for the standby database (described in Section 6.2.2). The range of values for this parameter is 0 to 365 days. The default value is 7 days.
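For example, to keep control file records for 30 days, the parameter could be set as follows; this is a sketch, and the value should be chosen to match your own backup retention:
CONTROL_FILE_RECORD_KEEP_TIME=30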
See Oracle Database Reference for more details about the CONTROL_FILE_RECORD_KEEP_TIME initialization parameter and Oracle Database Backup and Recovery Advanced User's Guide.
Use the DEPENDENCY attribute of the LOG_ARCHIVE_DEST_n initialization parameter to define one archival destination to receive redo data on behalf of several destinations, rather than transmitting redo data to each individual destination.
Figure 5-7 shows a Data Guard configuration in which the primary database transports redo data to one archiving destination that shares its archived redo log files with both a logical standby database and a physical standby database. These destinations are dependent on the successful completion of archival operations to the parent destination.
Specifying a destination dependency can be useful when several databases in the configuration, such as the standby databases shown in Figure 5-7, can access the same set of archived redo log files. In these situations, although the ARCn processes do not physically archive the redo data to each standby destination, the standby destinations need to know the location of the archived redo log files. This allows the standby database to access the archived redo log files when they become available for application by log apply services. You must specify an archiving destination as being dependent on the success or failure of another (parent) destination.
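As a rough sketch of how a dependency might be declared (the destination numbers and the boston service name are illustrative, and additional attributes may be required; see the DEPENDENCY attribute in Chapter 12 for the exact rules):
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/'
LOG_ARCHIVE_DEST_2='SERVICE=boston DEPENDENCY=LOG_ARCHIVE_DEST_1 OPTIONAL'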
An archive gap can occur on the standby system when it has not received one or more archived redo log files generated by the primary database. The missing archived redo log files are the gap. If there is a gap, it is automatically detected and resolved by Data Guard by copying the missing sequence of log files to the standby destination. For example, an archive gap can occur when the network becomes unavailable and automatic archiving from the primary database to the standby database temporarily stops. When the network is available again, automatic transmission of the redo data from the primary database to the failed standby database resumes.
Data Guard requires no manual intervention by the DBA to detect and resolve such gaps. The following sections describe gap detection and resolution.
An archive gap can occur whenever the primary database archives a log locally, but the log is not received at the standby site. Every minute, the primary database polls its standby databases to see if there are gaps in the sequence of archived redo log files.
Gap recovery is handled through the polling mechanism. For physical and logical standby databases, Oracle Change Data Capture, and Oracle Streams, Data Guard performs gap detection and resolution by automatically retrieving missing archived redo log files from the primary database. No extra configuration settings are required to poll the standby databases, to detect any gaps, or to resolve the gaps.
The important consideration here is that automatic gap recovery is contingent on the availability of the primary database. If the primary database is not available and you have a configuration with multiple physical standby databases, you can set up additional initialization parameters so that Redo Apply can resolve archive gaps from another standby database, as described in Section 5.8.3. See Section 10.8 for a scenario that shows how to resolve a gap manually.
Note: Prior to Oracle Database 10g Release 1, the FAL client and server were used to resolve gaps from the primary database.
The fetch archive log (FAL) process resolves gaps detected in the range of archived redo log files generated at the primary database and received at the physical standby database.
The FAL mechanism handles the following types of archive gaps and problems:
The FAL client and server are configured using the FAL_CLIENT and FAL_SERVER initialization parameters, which are set on the standby database. Define the FAL_CLIENT and FAL_SERVER initialization parameters only for physical standby databases, in the standby initialization parameter file.
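As an illustration only (the Oracle Net service names primary1 and standby1 are hypothetical), the physical standby initialization parameter file might contain entries such as the following:
FAL_SERVER=primary1   # Oracle Net service name of the database that supplies missing archived redo log files
FAL_CLIENT=standby1   # Oracle Net service name the FAL server uses to reach this standby database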
In some situations, automatic gap recovery does not take place and you must perform gap recovery manually. For example, manual gap recovery is required if you are using logical standby databases and the primary database is not available.
The following sections describe how to query the appropriate views to determine which log files are missing and perform manual recovery.
To determine if there is an archive gap on your physical standby database, query the V$ARCHIVE_GAP view as shown in the following example:
SQL> SELECT * FROM V$ARCHIVE_GAP;

   THREAD#  LOW_SEQUENCE#  HIGH_SEQUENCE#
----------  -------------  --------------
         1              7              10
The output from the previous example indicates your physical standby database is currently missing log files from sequence 7 to sequence 10 for thread 1. After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo log files on your primary database (assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1):
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND
  2> SEQUENCE# BETWEEN 7 AND 10;

NAME
--------------------------------------------------------------------------------
/primary/thread1_dest/arcr_1_7.arc
/primary/thread1_dest/arcr_1_8.arc
/primary/thread1_dest/arcr_1_9.arc
Copy these log files to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE statement on your physical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_8.arc';
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_9.arc';
After you register these log files on the physical standby database, you can restart Redo Apply.
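For example, one way to restart Redo Apply in the background is:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;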
To determine if there is an archive gap, query the DBA_LOGSTDBY_LOG view on the logical standby database. For example, the following query indicates there is a gap in the sequence of archived redo log files because it displays two files for THREAD 1 on the logical standby database. (If there are no gaps, the query will show only one file for each thread.) The output shows that the highest registered file is sequence number 10, but there is a gap at the file shown as sequence number 6:
SQL> COLUMN FILE_NAME FORMAT a55
SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
  2> WHERE NEXT_CHANGE# NOT IN
  3> (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
  4> ORDER BY THREAD#, SEQUENCE#;

   THREAD#  SEQUENCE# FILE_NAME
---------- ---------- -----------------------------------------------
         1          6 /disk1/oracle/dbs/log-1292880008_6.arc
         1         10 /disk1/oracle/dbs/log-1292880008_10.arc
Copy the missing log files, with sequence numbers 7, 8, and 9, to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_7.arc';
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_8.arc';
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_9.arc';
After you register these log files on the logical standby database, you can restart SQL Apply.
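For example, one way to restart SQL Apply is:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;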
This section contains the following topics:
This section describes using views to monitor redo log archival activity for the primary database. See Oracle Data Guard Broker and the Oracle Enterprise Manager online help for more information about the graphical user interface that automates many of the tasks involved in monitoring a Data Guard environment.
Enter the following query on the primary database to determine the current archived redo log file sequence numbers:
SQL> SELECT THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG
  2> WHERE STATUS='CURRENT';
Enter the following query at the primary database to determine which archived redo log file contains the most recently transmitted redo data:
SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;
Enter the following query at the primary database to determine which archived redo log file was most recently transmitted to each of the archiving destinations:
SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
  2> FROM V$ARCHIVE_DEST_STATUS
  3> WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';

DESTINATION        STATUS ARCHIVED_THREAD# ARCHIVED_SEQ#
------------------ ------ ---------------- -------------
/private1/prmy/lad VALID                 1           947
standby1           VALID                 1           947
The most recently written archived redo log file should be the same for each archive destination listed. If it is not, a status other than VALID might identify an error encountered during the archival operation to that destination.
You can issue a query at the primary database to find out if an archived redo log file was not received at a particular site. Each destination has an ID number associated with it. You can query the DEST_ID column of the V$ARCHIVE_DEST fixed view on the primary database to identify each destination's ID number.
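For example, the following query lists the ID number, destination, and current status of each archive destination defined on the primary database (output not shown):
SQL> SELECT DEST_ID, DESTINATION, STATUS FROM V$ARCHIVE_DEST;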
Assume the current local destination is 1, and one of the remote standby destination IDs is 2. To identify which log files are missing at the standby destination, issue the following query:
SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM
  2> (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1)
  3> LOCAL WHERE
  4> LOCAL.SEQUENCE# NOT IN
  5> (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND
  6> THREAD# = LOCAL.THREAD#);

 THREAD#  SEQUENCE#
--------- ---------
        1        12
        1        13
        1        14
See Appendix A for details about monitoring the archiving status of the primary database.
To see the progression of the transmission of redo data to the standby destination, set the LOG_ARCHIVE_TRACE parameter in the primary and standby initialization parameter files. See Appendix E for complete details and examples.
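For example, tracing can also be enabled dynamically on either database; the trace level shown (127) is illustrative only, and the meaning of each level is described in Appendix E:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_TRACE=127;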
This section describes the wait events that monitor the performance of the log transport services that were specified on the primary database with the ARCH, LGWR, SYNC, and ASYNC attributes on the LOG_ARCHIVE_DEST_n initialization parameter.
The following sections describe the wait events and associated timing information that are displayed by the V$SYSTEM_EVENT view.
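A minimal way to view these statistics is to query the V$SYSTEM_EVENT view directly, substituting the exact event names listed in Tables 5-3 through 5-6; the LIKE patterns below are illustrative only:
SQL> SELECT EVENT, TOTAL_WAITS, TIME_WAITED, AVERAGE_WAIT
  2> FROM V$SYSTEM_EVENT
  3> WHERE EVENT LIKE 'ARCH%' OR EVENT LIKE 'LGWR%' OR EVENT LIKE 'LNS%';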
For ARCn archival processing, Table 5-3 shows the wait events that monitor the time it takes to write the redo data to the online redo log files on the primary database. See Section 5.3.1 for information about ARCn archival processing.
For LGWR SYNC=NOPARALLEL archival processing, Table 5-4 shows the wait events that monitor the time the LGWR process on the primary database spends completing each step of the synchronous archival operation. See Section 5.3.2 for information about LGWR SYNC archival processing.
For LGWR ASYNC archival processing, Table 5-5 shows the wait events that monitor the time it takes to write the redo data to the online redo log files on the primary database. See Section 5.3.2 for information about LGWR ASYNC archival processing.
When either the LGWR and ASYNC attributes or the LGWR and SYNC=PARALLEL attributes are in effect, the LGWR process archives to the local online redo log file and submits the redo data to one or more LNSn processes (one for each destination) that asynchronously transmit the redo data over the network. Table 5-6 shows the wait events that monitor the time it takes for the LGWR and LNSn processes to communicate over interprocess communication (IPC) channels. See Section 5.3.2.3 for more information about configurations using the LGWR and LNSn processes.
1 Only the QUOTA_SIZE and QUOTA_USED attributes cannot be specified when defining a destination for the flash recovery area. This is because the amount of space allocated for the flash recovery area is defined with the DB_RECOVERY_FILE_DEST_SIZE parameter.
2 The managed recovery process (MRP) applies archived redo log files to the physical standby database and can start additional parallel execution (Pnnn) processes to balance workload.
3 The logical standby process (LSP) uses parallel execution (Pnnn) processes to apply archived redo log files to the logical standby database, using SQL interfaces.