3. The Hot Backup.
The ARCHIVELOG feature that arrived in Oracle V6 opened up a whole new set of backup options, but it also caused a lot of heartache until Oracle got it working right.
Good parts about Archivelog:
It gives you 24-hour availability.
It gives you point-in-time recovery.
It allows you to restore without losing any data at all.
It lets you keep the database up, preserving the data in your SGA.
Bad parts about Archivelog:
Administration can be difficult.
Repetitive failed load attempts can create massive logs.
When it gets stuck, the database comes to a halt.
Implementing Archivelog:
To start using Archivelog, simply alter your startup command file:
connect internal
startup mount exclusive
alter database archivelog;
archive log start;
alter database open;
and start up the database. From then on, the database will run in Archivelog mode until you turn it off (alter database noarchivelog). Note that a failed transaction is still a transaction: Archivelog keeps track of rollback segment extents, so rolled-back inserts or deletes affect it just like completed transactions. Turn it off (shut down, then restart in noarchivelog mode) before doing big loads from SQL*Loader.
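Taking the database out of Archivelog mode for a load looks something like this (a minimal sketch, assuming a sqldba session; swap archivelog back in for noarchivelog to re-enable it after the load):

connect internal
shutdown
startup mount exclusive
alter database noarchivelog;
alter database open;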
Another option for controlling the amount of archived redo generated is the UNRECOVERABLE keyword, available as of 7.2. UNRECOVERABLE can be used with the CREATE INDEX and CREATE TABLE AS SELECT commands; when it is used, the operation does not generate redo entries.
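In 7.2 syntax that looks something like this (a sketch; the table and index names are illustrative, not from any real schema):

-- bulk copy without generating redo (7.2 and up)
create table appl1_copy unrecoverable
    as select * from appl1;

-- index builds can skip redo the same way
create index appl1_copy_n1 on appl1_copy (id)
    unrecoverable;

Remember that an UNRECOVERABLE object cannot be rebuilt from the redo stream; back it up (or be prepared to re-create it) after the load.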
Once you start archiving, an archived log will be written every time Oracle is about to overwrite a previously written online redo log file. Each archived log goes to the directory indicated by the log_archive_dest parameter in your init.ora file. In V6 they will all be the same size as your redo logs (V7 archive logs can vary in size). They will accumulate until they run out of space on the destination device, at which point the database will freeze until you clear more space in the log_archive_dest location. So have a second location ready to receive them.
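The corresponding init.ora entries look something like this (a sketch; the destination path and format string are illustrative):

# init.ora -- archiving parameters
log_archive_start  = true               # start the ARCH process at instance startup
log_archive_dest   = u03:[oracle.arcs]  # where archived logs are written
log_archive_format = %s.arc             # %s = log sequence number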
Backup strategy for Archivelogs:
1. Do each tablespace one at a time. That is, rather than putting them all into backup mode, backing them up, and then ending backup on all of them, do them each separately. You don't want to risk a system crash while the entire database is in begin-backup state; the recovery is a mess. Minimize your window of vulnerability by having only one tablespace in backup state at any one time.
If you are using RAID devices or mirrored disks, where the loss of a single disk does not cause the loss of a file, you can simplify your backup procedure by putting all of the tablespaces into backup mode at once and backing them up as a set.
2. Before you back up the control file, force an archive log switch. This updates the header information in the control file.
3. Don't do it during heavy user activity. While a tablespace is in backup state, its activity is still written to the archive logs. However, it's written block-by-block rather than byte-by-byte, so changing one record in a tablespace that's being backed up results in that record's entire block being written to the archive area. NOTE: This is true only on platforms where the physical sector size is less than the Oracle logical block size. On systems where the physical disk transfer size equals the Oracle block size, you do not incur the penalty of logging the entire block. This is the case for MVS, VM, and perhaps other systems.
Sample Archive log command file for VMS:
NOTE: See Rama's book for a more complete script; the code is available at the Web site.
$ dup = "backup/ignore=(noback,interl,label)/log"
$ sqldbaCONNECT INTERNALalter tablespace system begin backup;exit
$ dup u01:[oracle]ora_system.dbs tape1:ora_system.bck/sav
$ sqldbaCONNECT INTERNALalter tablespace system end backup;alter tablespace appl1 begin backup;exit
$ dup u02:[oracle]appl1.dbs tape1:appl1.bck/sav
$ sqldbaCONNECT INTERNALalter tablespace appl1 end backup;exit
$!$! get archive logs
$ rename/log u03:[oracle.arcs]*.arc *.arclogs
$ rename/log u04:[oracle.arcs2]*.arc *.arclogs !secondary arcs dir
$ sqldbaCONNECT INTERNALalter system switch logfile;exit
$ dup u03:[oracle.arcs]*.arclogs,u04:[oracle.arcs2]*.arclogs tape1:logs.bck/sav
$ del/log u03:[oracle.arcs]*.arclogs;0
$ del/log u04:[oracle.arcs2]*.arclogs;0$!$! get control file
$ sqldbaCONNECT INTERNALalter database backup controlfile to 'u01:[oracle]control.bkp' reuse;exit
$ dup u01:[oracle]control.bkp tape1:control.bck/sav
Note: The "alter system switch logfile" command is all but undocumented, (see pg 3-15 of the DBA guide. It refers you to a nonexistent cross-reference). It will NOT show up in the alert log. Don't be alarmed by that; it does actually work.
NEW NOTE: Some people modify the script above to automatically pull the file names from DBA_DATA_FILES, so they don't have to hardcode the file names. I would only do that if the datafiles were all mirrored, so that a media failure wouldn't take down my database.
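Pulling the list is a one-liner against the data dictionary (a sketch; feed the output into the begin backup / dup / end backup steps above):

select tablespace_name, file_name
  from sys.dba_data_files
 order by tablespace_name;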
Integrating the three methods.
Use hot backups for all of your transaction-critical data. As a backup to the hot backups, perform cold backups periodically. As a backup to the physical file backups, use Exports.
If you have a large amount of static data, you may wish to export only certain small tables, while relying on loading programs to re-create the static data when needed. As your database grows larger, the time required to perform Imports will remove Export as a viable backup option for all but the smallest of your tables.
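A table-level export might look something like this (a sketch; the username, password, and table names are illustrative, and the exact invocation varies by O/S):

$ exp userid=appl1/secret tables=(codes,lookup_values) file=small_tables.dmp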
If at all possible under your O/S, it is also a good idea to shadow (mirror) the disks on which your online and archived redo logs reside.