
Monday, May 08, 2017

Recovering Table, Part 4 - Note

There are two additional pieces of information I want to give you:
  1. The directories created in order to restore the auxiliary database
  2. The channel configuration of the RMAN script

The script I used to recover the table is:

run {
recover table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE
until time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')"
auxiliary destination '/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp'
remap table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE:PIANIFICAZIONE_MAN_BCK;
}


  • The directories that RMAN uses for the "core" files
In the directory specified by the "auxiliary destination" parameter, RMAN creates a directory with the same name as the production database; in my case it was PCMSP. The "core" files are restored here.

Under this directory, there will be 3 directories:



  • one for the controlfile (/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/controlfile/)
  • one for the datafiles (/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/)
  • one for the online redo logs (/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/)
The archived logs will be restored under the PCMSP directory (at the same level as the controlfile/datafile/onlinelog directories).

The list of "core" datafiles; the archived logs are not shown here.

  • The directories that RMAN uses for the "application" files
At the same level as the directory where RMAN restores the core files (auxiliary destination=/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp), a new directory is created (DGAF_PITR_PCMSP in my case) where the "application" datafiles will go.

core files => /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/
application files => /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/

where

  • DGAF is a prefix generated by RMAN (it matches the auxiliary instance SID "DgAf")
  • PITR stands for Point In Time Recovery
  • PCMSP is the name of the database

The "core" directory (PCMS) and "application" directory (DESK_PITR_PCMSP), are created at the same level. Also the archived log will be restored to the same level. Here are shown the archived log needed for the recovery of "application" tablespace
Summarizing what I said, the tree of directy is:

<auxiliary destination>
     |
     -> archived logs for the "core" part
     -> archived logs for the "application" part
     -> <db_name>
            |
            -> controlfile dir
                  |
                  -> controlfile
            -> onlinelog dir
                  |
                  -> online redo log files
            -> datafile dir
                  |
                  -> SYSTEM tbs
                  -> SYSAUX tbs
                  -> TEMP tbs
                  -> all UNDO tbs
     -> <pitr_db_name>
            |
            -> datafiles for the "application" part


  • Channel configuration
Another important thing to show is the script I used. As you can see, I'm not allocating a channel. In my case, the backup is on tapes. If you try to allocate a channel like this (I use Legato NetWorker; that's why all the env parameters start with "NSR_"):

run {
ALLOCATE CHANNEL c1 TYPE 'SBT_TAPE'
PARMS 'ENV=(NSR_SERVER=bck_server, NSR_CLIENT=bck_client, NSR_GROUP=db_group, NSR_DATA_VOLUME_POOL=db_pool_1M)';

recover table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE
until time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')"
auxiliary destination '/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp'
remap table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE:PIANIFICAZIONE_MAN_BCK;
}


then you get the following error:

RMAN-06034: at least 1 channel must be allocated to execute this command

The cause and solution are explained in this Metalink note: "Recover Table Fails with RMAN-06034:
at least 1 channel must be allocated to execute this command (Doc ID 2105325.1)":


You cannot manually allocate a channel when RMAN implicitly creates the auxiliary instance; the channel specification must be configured in the target database. This is done with:

CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' FORMAT '%U' PARMS 'ENV=(NB_ORA_CLIENT=......, NB_ORA_POLICY=............., NB_ORA_SCHED=.....)';


So, I have to connect to the target (and to the catalog, if one is used) and run the "configure channel" command:

$ rman
RMAN> connect target /
RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' FORMAT '%U' PARMS 'ENV=(NSR_CLIENT=......, NSR_SERVER=.....)';

Using Legato NetWorker, the only two parameters you need for the restore are NSR_CLIENT and NSR_SERVER; the tape numbers are stored in the catalog/controlfile.
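
As a quick sanity check, the configuration persisted by CONFIGURE is also visible from SQL on the target database; a minimal sketch:

-- persistent RMAN configuration, including the SBT channel PARMS
SELECT name, value FROM v$rman_configuration;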


  • Other recovery scenarios

As mentioned in Part 1, there are several scenarios for recovering a table. I report two examples from the manual pages.

The first one: the recovery generates only a dump file; the import is not performed.

RECOVER TABLE SCOTT.EMP, SCOTT.DEPT
    UNTIL TIME 'SYSDATE-1'
    AUXILIARY DESTINATION '/tmp/oracle/recover'
    DATAPUMP DESTINATION '/tmp/recover/dumpfiles'
    DUMP FILE 'emp_dept_exp_dump.dat'
    NOTABLEIMPORT;
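
With NOTABLEIMPORT, you have to load the dump yourself later with Data Pump. A minimal sketch, assuming a directory object (here hypothetically named RECOVER_DIR) pointing to /tmp/recover/dumpfiles:

$ impdp system DIRECTORY=RECOVER_DIR DUMPFILE=emp_dept_exp_dump.dat TABLES=SCOTT.EMP,SCOTT.DEPT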

The second one: the PITR uses UNTIL SEQUENCE and also remaps the tablespace.

RECOVER TABLE SH.SALES:SALES_1998, SH.SALES:SALES_1999
    UNTIL SEQUENCE 354
    AUXILIARY DESTINATION '/tmp/oracle/recover'
    REMAP TABLE 'SH'.'SALES':'SALES_1998':'HISTORIC_SALES_1998',
              'SH'.'SALES':'SALES_1999':'HISTORIC_SALES_1999' 
    REMAP TABLESPACE 'SALES_TS':'SALES_PRE_2000_TS';



Recovering Table, Part 1 - Introduction
Recovering Table, Part 2 - The "Core" part
Recovering Table, Part 3 - The "Application" part

Recovering Table, Part 3 - The "Application" part

The script I used to recover the table is:

run {
recover table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE
until time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')"
auxiliary destination '/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp'
remap table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE:PIANIFICAZIONE_MAN_BCK;
}

The table recovery is split into 2 logical steps:

  1. Restore the core part
  2. Restore the application part


  • Restore the "Application" part

During this step, RMAN will restore:

  • The tablespaces where the table is located
  • The archived logs
Transparently, RMAN runs the following scripts:

##########################
Restore the "application" tablespace
##########################

In my case, there is just one tablespace.

{
# set requested point in time
set until  time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')";
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile  13 to new;
set newname for datafile  14 to new;
set newname for datafile  17 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  13, 14, 17;

switch clone datafile all;
}

where

00013 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_%u_.dbf
00014 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_%u_.dbf
00017 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_%u_.dbf


###########################
Recover the "application" tablespace
###########################

{
# set requested point in time
set until  time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  13 online";
sql clone "alter database datafile  14 online";
sql clone "alter database datafile  17 online";
# recover and open resetlogs
recover clone database tablespace  "PCMS_DATA", "SYSTEM", "UNDOTBS1", "UNDOTBS2", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}

As you can see in the log file, also in this case it is necessary to restore the archived logs.

##########################
Creation of expdp/impdp directory
##########################

{
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp''";
}
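
While the job is running, you can look at the directory object RMAN created on the target; a minimal sketch:

-- the Data Pump directory used for the export/import step
SELECT directory_name, directory_path
  FROM dba_directories
 WHERE directory_name = 'TSPITR_DIROBJ_DPDIR';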

#############
expdp of the table
#############

After the database recovery, the table export starts:

Performing export of tables...
   EXPDP> Starting "SYS"."TSPITR_EXP_DgAf_BngC":  
   EXPDP> Estimate in progress using BLOCKS method...
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   EXPDP> Total estimation using BLOCKS method: 7 MB
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   EXPDP> . . exported "PCMS_ADM"."PIANIFICAZIONE_MANUTENZIONE"    5.110 MB   34394 rows
   EXPDP> Master table "SYS"."TSPITR_EXP_DgAf_BngC" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_DgAf_BngC is:
   EXPDP>   /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/tspitr_DgAf_81953.dmp
   EXPDP> Job "SYS"."TSPITR_EXP_DgAf_BngC" successfully completed at Thu May 4 15:51:03 2017 elapsed 0 00:00:29
Export completed


{
# shutdown clone before import
shutdown clone abort
}

###########
impdp of table
###########

...and the table import:

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_DgAf_gvec" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_DgAf_gvec":  
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "PCMS_ADM"."PIANIFICAZIONE_MAN_BCK"         5.110 MB   34394 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_DgAf_gvec" successfully completed at Thu May 4 15:51:14 2017 elapsed 0 00:00:04
Import completed

As you can see, once the expdp command is completed, the auxiliary database is shut down, because it is not necessary anymore.


#########
The last step
#########

After importing the table, all restored files ("core" files, "application" files and archived logs) will be deleted.

auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_temp_djpckr57_.tmp deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_temp_djpckr70_.tmp deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_8_djpddy16_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_6_djpddo9z_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_4_djpdddof_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_2_djpdd443_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_7_djpddsr2_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_5_djpddk23_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_3_djpdd893_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_1_djpdczsp_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_djpcndlt_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_djpd013l_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_djpcop6k_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_sysaux_djpcc7lb_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_sysaux_djpc4pyw_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs2_djpcbo4t_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs2_djpc9bnh_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs1_djpc8t8l_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs1_djpc7rts_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_system_djpc79bp_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_system_djpc65fy_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/controlfile/o1_mf_djpc35l1_.ctl deleted
auxiliary instance file tspitr_DgAf_81953.dmp deleted

Recovering Table, Part 1 - Introduction
Recovering Table, Part 2 - The "Core" part
Recovering Table, Part 4 - Note

Recovering Table, Part 2 - The "Core" part

The script I used to recover the table is:

run {
2> recover table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE
3> until time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')"
4> auxiliary destination '/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp'
5> remap table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE:PIANIFICAZIONE_MAN_BCK;
6> }

The table recovery is split into 2 logical steps:
  1. Restore the "core" part
  2. Restore the "application" part
The log of the recovery is here.
  • Restore the Core part

During this step, RMAN will restore an auxiliary database with the following components:
  • SYSTEM tablespace
  • SYSAUX tablespace
  • UNDO tablespaces
  • TEMP tablespace
  • Archived log

Regardless of whether the database is RAC or not, the auxiliary database will be a single instance. In my case, the database (RAC) name is "PCMSp" and the 2 instances are "PCMSp1" and "PCMSp2". The auxiliary database will have these parameters:

SID='DgAf'
db_name=PCMSP
db_unique_name=DgAf_pitr_PCMSP

(These parameters are chosen automatically by RMAN.) Also, the UNDO tablespaces of all instances will be restored (in my case, 2 instances mean 2 UNDO tablespaces). Transparently, RMAN runs the following scripts:

################
Restore the controlfile
################

{
# set requested point in time
set until  time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')";
# restore the controlfile
restore clone controlfile;

# mount the controlfile
sql clone 'alter database mount clone database';

# archive current online log 
sql 'alter system archive log current';
}

###################
Restore the "core" datafile
###################

{
# set requested point in time
set until  time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')";
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  2 to new;
set newname for clone datafile  5 to new;
set newname for clone datafile  6 to new;
set newname for clone datafile  7 to new;
set newname for clone datafile  8 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  4 to new;
set newname for clone tempfile  1 to new;
set newname for clone tempfile  2 to new;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 2, 5, 6, 7, 8, 3, 4;

switch clone datafile all;
}

Just to be clear, the datafiles are mapped as follows:

00001 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_system_%u_.dbf
00002 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_system_%u_.dbf
00003 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_sysaux_%u_.dbf
00004 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_sysaux_%u_.dbf
00005 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs1_%u_.dbf
00006 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs1_%u_.dbf
00007 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs2_%u_.dbf
00008 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs2_%u_.dbf
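
As an aside, you can check in advance which UNDO tablespaces the restore will need (one per RAC instance) with a query on the source database; a minimal sketch:

-- one row per instance: the instance-specific undo_tablespace setting
SELECT inst_id, value AS undo_tablespace
  FROM gv$parameter
 WHERE name = 'undo_tablespace';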


#############
Recover database
#############

{
# set requested point in time
set until  time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  2 online";
sql clone "alter database datafile  5 online";
sql clone "alter database datafile  6 online";
sql clone "alter database datafile  7 online";
sql clone "alter database datafile  8 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  4 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "UNDOTBS2", "SYSAUX";
sql clone 'alter database open read only';
}

During this step, the archived logs will be restored. After the archived logs are applied, the "DgAf" (auxiliary) database can be opened and the restore of the "application" part can start.

####################
Start of Auxiliary database
####################

sql statement: alter database open read only
{
   sql clone "create spfile from memory";
   shutdown clone immediate;
   startup clone nomount;
   sql clone "alter system set  control_files = 
  ''/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/controlfile/o1_mf_djpc35l1_.ctl'' comment=
''RMAN set'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
# mount database
sql clone 'alter database mount clone database';
}

Recovering Table, Part 1 - Introduction

Recovering a single table or table partition is a new 12c feature.

To do this, Oracle performs 5 steps. From the manual pages:

  1. Determines which backup contains the tables or table partitions that need to be recovered, based on the point in time specified for the recovery.
  2. Creates an auxiliary database and recovers the specified tables or table partitions, until the specified point in time, into this auxiliary database. You can specify the location to which the recovered data files are stored in the auxiliary database.
  3. Creates a Data Pump export dump file that contains the recovered tables or table partitions. You can specify the name and the location of the export dump file used to store the metadata of the recovered tables or table partitions.
  4. (Optional) Imports the Data Pump export dump file into the target instance. You can choose not to import the export dump file that contains the recovered tables or table partitions into the target database. If you do not import the export dump file as part of the recovery process, you must manually import it later using the Data Pump Import utility.
  5. (Optional) Renames the recovered tables or table partitions in the target database.

Step 4 is optional because you can skip importing the table; in this case, you have to import it manually. Step 5 is optional because you can skip renaming the imported table; in this case, the existing table will be overwritten.

In my example (a real production case) I used both the table import and the table remap.

The RMAN script I used is:

run {
recover table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE
until time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')"
auxiliary destination '/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp'
remap table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE:PIANIFICAZIONE_MAN_BCK;
}

Because of step 2 (the creation of an auxiliary database), you have to calculate the space needed before starting the table recovery. The recovery process is split into 2 logical phases:
  1. Restore the core part
  2. Restore the application part
For the "core part" you have to calculate:

  • SYSTEM tablespace size
  • SYSAUX tablespace size
  • UNDO tablespace size
  • TEMP tablespace size
  • archived log size

while for the "application part":

  • size of the tablespace where the table is located
  • (optionally) size of the tablespace where the LOB segments are located
  • archived log size

In order to calculate the total space, I wrote this query:

SELECT round(sum(bytes)/1024/1024/1024,2) gbytes
  FROM (
    -- SYSTEM and SYSAUX
    SELECT bytes FROM dba_data_files WHERE tablespace_name in ('SYSTEM','SYSAUX')
    UNION ALL
    -- UNDO
    SELECT bytes FROM dba_data_files ddf JOIN dba_tablespaces dt
        ON (ddf.tablespace_name=dt.tablespace_name) WHERE dt.contents= 'UNDO'
    UNION ALL
    -- TEMP
    SELECT bytes FROM dba_temp_files
    UNION ALL
    -- table's datafiles
    SELECT bytes FROM dba_data_files WHERE tablespace_name in (
       SELECT tablespace_name FROM dba_segments WHERE owner='&&owner' AND segment_name='&&tabname')
    UNION ALL
    -- LOB's datafiles
    SELECT bytes FROM dba_data_files WHERE tablespace_name in (
       SELECT tablespace_name FROM dba_segments WHERE (owner, segment_name) in (
          SELECT owner, segment_name FROM dba_lobs WHERE owner='&&owner' AND table_name='&&tabname'))
    UNION ALL
    -- archived logs generated after the last full backup, up to the PITR time
    SELECT input_bytes bytes FROM v$rman_backup_job_details
     WHERE end_time <= to_date('&pitr','dd/mm/yyyy hh24:mi:ss')
       AND end_time > (SELECT max(end_time) FROM v$rman_backup_job_details WHERE input_type='DB FULL')
 );
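
For example, to run it without being prompted, you can predefine the substitution variables in SQL*Plus (the script file name here is hypothetical):

SQL> define owner   = PCMS_ADM
SQL> define tabname = PIANIFICAZIONE_MANUTENZIONE
SQL> define pitr    = '02/05/2017 17:00:00'
SQL> @calc_recover_table_space.sql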

In the next posts, I'm going to show the 2 steps of the table recovery.

Recovering Table, Part 2 - The "Core" part
Recovering Table, Part 3 - The "Application" part
Recovering Table, Part 4 - Note

Update

2017/Nov/11 - Modified the select statement in order to correct some bugs.

Saturday, May 06, 2017

SQL Profile: Case 2 - Part 2: Data Dictionary

SQL Profile: Case 2 - Part 1: Create/Execute/Report/Accept

All the information about SQL Profiles is saved in the Data Dictionary, obviously.

These are some DBA_ views from the 12cR2 reference manual:

dba_advisor_actions
dba_advisor_recommendations
dba_advisor_findings
dba_advisor_rationale

Using this query, you can read some of that information:

col owner for a10
col action_message for a20
col finding_message for a20
col more_info for a30
col finding_name for a20
col RATIONALE_MESSAGE for a25
col ATTR5 for a40
col recommendation_type for a15
col rationale_type for a15
set pages 99

SELECT 
      a.owner
    , t.rationale_id
    , a.message action_message
    , r.type recommendation_type
    , f.finding_name
    , f.type finding_type
    , f.message finding_message
    , f.more_info
    , t.message rationale_message
    , t.type rationale_type
    , t.attr5
FROM 
      dba_advisor_actions         a
    , dba_advisor_recommendations r
    , dba_advisor_findings        f
    , dba_advisor_rationale       t
WHERE
     a.task_name  = 'task_select'
AND  a.task_id    = r.task_id
AND  a.rec_id     = r.rec_id
AND  a.task_id    = t.task_id
AND  a.rec_id     = t.rec_id
AND  f.task_id    = r.task_id
AND  f.finding_id = r.finding_id
ORDER BY
     t.rationale_id;

You can find the output here (query 1 section).

The interesting thing is that a SQL Profile is just a set of "HINTs" saved in the Data Dictionary. These hints are visible by querying the SQLOBJ$DATA table.

col outline_hints for a30
select hint as outline_hints
  from (select p.name, p.signature, p.category,
               row_number() over (partition by sd.signature, sd.category
                                  order by sd.signature) row_num,
               extractvalue(value(t), '/hint') hint
          from sqlobj$data sd,
               dba_sql_profiles p,
               table(xmlsequence(extract(xmltype(sd.comp_data),
                                         '/outline_data/hint'))) t
         where sd.obj_type = 1
           and p.signature = sd.signature
           and p.category = sd.category
           and p.name like ('profile_7srkyyv9jxhzm'))
 order by row_num;

OUTLINE_HINTS
------------------------------
PARALLEL
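
To get a quick overview without parsing the XML, you can also list the profiles and their status from DBA_SQL_PROFILES; a minimal sketch:

-- one row per SQL Profile; STATUS shows ENABLED/DISABLED
SELECT name, category, status, created
  FROM dba_sql_profiles
 ORDER BY created;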


Reference.

You can find more about the hints behind a SQL Profile in a PDF by Christian Antognini (page 22) and in another of his posts. In the PDF, Antognini shows the relationship between the data dictionary views. Keep in mind that "As of Oracle Database 11g the previous query can no longer be used. In fact, the data dictionary has been changed. The tables SQLPROF$ and SQLPROF$ATTR no longer exist. As of Oracle Database 11g the information is stored in the tables SQLOBJ$ and SQLOBJ$DATA".


Also, there are a lot of posts by Kerry Osborne. Here I link just 3 of them:

How to Lock SQL Profiles Generated by SQL Tuning Advisor  (November 30, 2010)
Single Hint SQL Profiles (February 1, 2010)
SQL Profile (April 4, 2009)


SQL Profile: Case 2 - Part 1: Create/Execute/Report/Accept

SQL Profile: Case 2 - Part 1: Create/Execute/Report/Accept

SQL Profile: Case 2 - Part 2: Data Dictionary

Case 2: The statement is in SQL Area but it has a wrong execution plan

This case is like the previous one; the difference is that the statement has already been run. First, you have to run the DBMS_SQLTUNE.CREATE_TUNING_TASK function.

As a reminder, the 4 steps are:

(a) Create a tuning task
(b) Execute the tuning task
(c) Report the tuning task
(d) Accept the tuning task

Compared to the previous case, the procedure you now have to run is:

DECLARE
    task_name VARCHAR2(30);
BEGIN
    task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
        sql_id => '&sqlid',
        scope => 'COMPREHENSIVE',
        time_limit => 30, -- number of seconds of analysis
        task_name => 'task_&sqlid',
        description => 'Task to tune a query');
END;
/

As you can see, we lose the "sql_text" and "user_name" parameters. This is because now you have the sql_id (instead of the SQL text) and the statement is not tied to a specific user.
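
For comparison, here is a minimal sketch of the Case 1 variant (statement not yet executed), which uses the sql_text/user_name overload of CREATE_TUNING_TASK; the statement text and user name are placeholders:

DECLARE
    task_name VARCHAR2(30);
BEGIN
    task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
        sql_text    => 'select ooo.owner, oao.object_name from OSL_ALL_OBJECTS oao join OSL_OWNER_OBJECTS ooo on (ooo.object_id = oao.object_id)',
        user_name   => 'ASALZANO',  -- the parsing user (placeholder)
        scope       => 'COMPREHENSIVE',
        time_limit  => 30, -- number of seconds of analysis
        task_name   => 'task_select_case1',
        description => 'Task to tune a query');
END;
/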

For this example, I use "&sqlid" so that the PL/SQL code stays general. And, to show how close this case is to the previous one, I'm going to use the same statement:

select 
      ooo.owner
    , oao.object_name 
from 
      OSL_ALL_OBJECTS   oao 
join  OSL_OWNER_OBJECTS ooo 
on
      (ooo.object_id = oao.object_id);

Obviously, I start in a clean environment, where there is neither a SQL Profile nor a tuning task. You can find the table and index definitions here.
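
A quick way to verify that the environment is clean is to check the advisor tasks and the profiles; a minimal sketch (both queries should return no rows):

SELECT task_name, status FROM dba_advisor_tasks WHERE task_name = 'task_select';
SELECT name, status FROM dba_sql_profiles;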

Looking for the sql_id in sql_area, I find:

SQL> select sql_id, sql_text from v$sql where sql_text like '%ooo%'

gwwazjphk20wc      3252867984
select       ooo.owner     , oao.object_name from       OSL_ALL_OBJECTS   oao jo
in  OSL_OWNER_OBJECTS ooo on       (ooo.object_id = oao.object_id)

The output of the explain plan is here: there are 2 TABLE ACCESS FULL operations.

------------------------------------------------------------------------------------------------
| Id  | Operation          | Name              | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                   |       |       |       | 16476 (100)|          |
|   1 |  HASH JOIN         |                   |  9999K|   371M|  7712K| 16476   (1)| 00:00:03 |
|   2 |   TABLE ACCESS FULL| OSL_OWNER_OBJECTS |   262K|  4613K|       |   113   (1)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| OSL_ALL_OBJECTS   |  9999K|   200M|       |  5516   (1)| 00:00:01 |
------------------------------------------------------------------------------------------------


Step a) Create a tuning task

DECLARE
    task_name VARCHAR2(30);
BEGIN
    task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
        sql_id => 'gwwazjphk20wc',
        scope => 'COMPREHENSIVE',
        time_limit => 30, -- number of seconds of analysis
        task_name => 'task_select',
        description => 'Task to tune a query');
END;
/


The "task_name" is just a name that I was given. You have to choose a different name (if you want).


Step b) Execute the tuning task

BEGIN
    DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'task_select');
END;
/
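
The execution can run for up to time_limit seconds; meanwhile, you can follow its progress from another session; a minimal sketch:

-- the status goes from EXECUTING to COMPLETED
SELECT task_name, status
  FROM dba_advisor_log
 WHERE task_name = 'task_select';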

Step c) Report the tuning task

set long 10000
set longchunksize 1000
set linesize 230
set heading off
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('task_select') from DUAL;
set heading on

You can find the full output here. Accepting the recommendation, the new plan will be:

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name              | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                   |  9999K|   371M|  3136   (1)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR       |                   |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM) | :TQ10000          |  9999K|   371M|  3136   (1)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|*  3 |    HASH JOIN          |                   |  9999K|   371M|  3136   (1)| 00:00:01 |  Q1,00 | PCWP |            |
|   4 |     TABLE ACCESS FULL | OSL_OWNER_OBJECTS |   262K|  4613K|    63   (2)| 00:00:01 |  Q1,00 | PCWP |            |
|   5 |     PX BLOCK ITERATOR |                   |  9999K|   200M|  3063   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|   6 |      TABLE ACCESS FULL| OSL_ALL_OBJECTS   |  9999K|   200M|  3063   (1)| 00:00:01 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------

Step d) Accept the tuning task

In the "SQL Profile Finding" section, you find the command that  you have to run, if you want to accept the reccomandation:

execute dbms_sqltune.accept_sql_profile( -
      task_name => 'task_select'    -
    , task_owner => 'ASALZANO'      -
    , replace => TRUE               -
    , profile_type => DBMS_SQLTUNE.PX_PROFILE);


I prefer to give the SQL Profile my own name, so I modify the previous statement a little:

execute dbms_sqltune.accept_sql_profile( -
      task_name => 'task_select'         -
    , task_owner => 'ASALZANO'           -
    , replace => TRUE                    -
    , name => 'profile_7srkyyv9jxhzm'    -
    , profile_type => DBMS_SQLTUNE.PX_PROFILE);


At this point, the statement uses the SQL Profile:

SQL> set autot trace exp
SQL> select
      ooo.owner
    , oao.object_name
from
      OSL_ALL_OBJECTS   oao
join  OSL_OWNER_OBJECTS ooo
on
      (ooo.object_id = oao.object_id);  2    3    4    5    6    7    8

Execution Plan
----------------------------------------------------------
Plan hash value: 3357147682

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name              | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                   |  9999K|   371M|  3136   (1)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR       |                   |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM) | :TQ10000          |  9999K|   371M|  3136   (1)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|*  3 |    HASH JOIN          |                   |  9999K|   371M|  3136   (1)| 00:00:01 |  Q1,00 | PCWP |            |
|   4 |     TABLE ACCESS FULL | OSL_OWNER_OBJECTS |   262K|  4613K|    63   (2)| 00:00:01 |  Q1,00 | PCWP |            |
|   5 |     PX BLOCK ITERATOR |                   |  9999K|   200M|  3063   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|   6 |      TABLE ACCESS FULL| OSL_ALL_OBJECTS   |  9999K|   200M|  3063   (1)| 00:00:01 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("OOO"."OBJECT_ID"="OAO"."OBJECT_ID")

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2
   - SQL profile "profile_7srkyyv9jxhzm" used for this statement


Delete/Disable/Modify SQL Profile

If you want to drop or modify the SQL Profile and/or the tuning task, you have to use the DBMS_SQLTUNE package. For my example:


  • Drop the Tuning task
exec DBMS_SQLTUNE.DROP_TUNING_TASK ('task_select')


  • Drop the SQL Profile
exec DBMS_SQLTUNE.DROP_SQL_PROFILE ('profile_7srkyyv9jxhzm')


  • Disable (modify) the SQL Profile
exec DBMS_SQLTUNE.ALTER_SQL_PROFILE (  -
      name =>  'profile_7srkyyv9jxhzm' -
    , attribute_name => 'STATUS'       -
    , value =>  'DISABLED')
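
After disabling it, you can verify the new status; a minimal sketch:

-- STATUS should now be DISABLED
SELECT name, status
  FROM dba_sql_profiles
 WHERE name = 'profile_7srkyyv9jxhzm';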

SQL Profile: Case 2 - Part 2: Data Dictionary