Translation

The oldest posts are written in Italian. If you are interested and want to read a post in English, please use Google Translate; you can find it on the right side. If the translation is wrong, please email me and I'll try to translate it for you.

Friday, July 14, 2017

SPM, Part 02: Adaptive Cursor Sharing


This post is the second part of two.
It is a copy and paste from several links that you can find in the "Reference" section.


Adaptive Cursor Sharing

Oracle introduced the bind peeking feature in Oracle 9i. With bind peeking, the optimizer peeks at the values of user-defined bind variables on the first invocation of a cursor. This allows the optimizer to determine the selectivity of any WHERE clause condition as if literals had been used instead of bind variables, thus improving the quality of the execution plan generated for statements using bind variables.

However, there was a problem with this approach when the column used in the WHERE clause with the bind variable contained skewed data. If there is data skew in the column, it is likely that a histogram was created on it during statistics gathering. When the optimizer peeks at the value of the user-defined bind variable and chooses a plan, there is no guarantee that this plan will be good for all possible values of the bind variable. In other words, the plan is optimized for the peeked value of the bind variable, but not for all possible values.

In 11g, the optimizer has been enhanced to allow multiple execution plans to be used for a single statement that uses bind variables. This ensures that the best execution plan will be used depending on the bind value.

A cursor is marked bind sensitive (meaning "being monitored") if the optimizer believes the optimal plan may depend on the value of the bind variable. When a cursor is marked bind sensitive, Oracle monitors the behavior of the cursor with different bind values, to determine whether a different plan for different bind values is called for. Oracle observes the cursor for a while and sees how the values differ: if the different values can potentially alter the plan, the cursor is labeled bind sensitive and the column IS_BIND_SENSITIVE shows "Y". After a few executions, the database knows more about the cursor and the values, and decides whether the cursor should change plans based on the values. If that is the case, the cursor becomes bind aware and the column IS_BIND_AWARE shows "Y". In summary: bind-sensitive cursors are potential candidates for changed plans, and bind-aware ones are where the plans actually change. A view, V$SQL_CS_HISTOGRAM, shows how many times the SQL statement was executed, organized into three buckets for each child cursor.

This cursor was marked bind sensitive because the histogram on the "deptno" column, for example, was used to compute the selectivity of the predicate "where deptno = :deptno". Since the presence of the histogram indicates that the column is skewed, different values of the bind variable may call for different plans.

Behind the scenes, during the first two executions, Oracle was monitoring the behavior of the queries, and determined that the different bind values caused the data volumes manipulated by the query to be significantly different. Based on this difference, Oracle "adapts" its behavior so that the same plan is not always shared for this query. Suppose we then execute the query again with a more selective bind value.
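
As a minimal sketch of this kind of test (the EMP table, the skewed DEPTNO column and the values are illustrative assumptions, not from the original demo):

-- run the same statement first with an unselective, then with a selective bind value
var deptno number

exec :deptno := 10   -- unselective value: most rows have deptno = 10
select count(*) from emp where deptno = :deptno;

exec :deptno := 9    -- highly selective value: very few rows
select count(*) from emp where deptno = :deptno;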

 If we look at the execution statistics again, there are three cursors now:

select child_number, executions, buffer_gets, is_bind_sensitive, is_bind_aware, is_shareable from v$sql where sql_text like '%deptno = :deptno%';

The original cursor was discarded when the cursor switched to bind-aware mode (meaning: the selectivity of the predicates during a soft parse determines which optimal plan to use among the ones in memory). This is a one-time overhead. Note that the cursor is marked as not shareable (is_shareable is "N"), which means that this cursor will be among the first to be aged out of the cursor cache, and that it will no longer be used. In other words, it is just waiting to be garbage collected.

There is one other reason that you may notice additional cursors for such a query in 11g.  When a new bind value is used, the optimizer tries to find a cursor that it thinks will be a good fit, based on similarity in the bind value's selectivity.  If it cannot find such a cursor, it will create a new one (like above, when one (#1) was created for unselective "10" and one (#2) was created for highly-selective "9").  If the plan for the new cursor is the same as one of the existing cursors, the two cursors will be merged, to save space in the cursor cache.  This will result in one being left behind that is in a not shareable state.  This cursor will be aged out first if there is crowding in the cursor cache, and will not be used for future executions.

So two mechanisms are required: one to control how to change state from "sensitive" to "aware", and another to select a plan in memory according to the values of the binds. The column V$SQL_CS_STATISTICS.ROWS_PROCESSED seems to drive this mechanism. This column is updated during a hard parse and is a fuzzy representation of the amount of data the query manipulates during its execution. For small values of V$SQL_CS_STATISTICS.ROWS_PROCESSED, Oracle increments the count of V$SQL_CS_HISTOGRAM bucket 0 by one; for medium values, bucket 1; and for large values, bucket 2. The cursor-sharing histogram buckets 0-2 are updated on every execution of a bind-sensitive query, and they represent the data volume manipulated by each execution. If there are significant variances, the query would benefit from ACS and it becomes bind aware.
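
A hedged sketch for inspecting this view for a given statement (&sqlid is a placeholder for the SQL_ID you are investigating):

select sql_id, child_number, bind_set_hash_value, peeked, executions, rows_processed
from v$sql_cs_statistics
where sql_id = '&sqlid';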

There are a couple of scenarios where the values stored in these 3 buckets cause the cursor to become bind aware (the query sketch after this list shows how to read them):

  1. When two of the buckets have the same value, and this value is not zero
  2. When the smallest and the largest buckets (0 and 2) have a value other than zero
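
A minimal sketch for reading the three buckets of each child cursor (&sqlid is again a placeholder):

select sql_id, child_number, bucket_id, count
from v$sql_cs_histogram
where sql_id = '&sqlid'
order by child_number, bucket_id;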

Reference

SQL Plan Management (Part 1 of 4) Creating SQL plan baselines (Maria Colgan)
SQL Plan Management (Part 2 of 4) Creating SQL plan baselines (Maria Colgan)
SQL Plan Management (Part 3 of 4) Creating SQL plan baselines (Maria Colgan)
SQL Plan Management (Part 4 of 4) Creating SQL plan baselines (Maria Colgan)
How do adaptive cursor sharing and SQL Plan Management interact? (Unknown)
Why are there more cursors in 11g for my query containing bind variables? (Unknown)
Oracle Optimizer and SPM plan interaction (Mohamed Houri)
Stage and Fix SQL Statement (Unknown)

dbms_sqldiag (Jonathan Lewis)

SQL Patch I (Dominic Brooks)
SQL Patch II (Dominic Brooks)
SQL Patch III (Dominic Brooks)
SQL Patch IV (Dominic Brooks)
Adding and Disabling Hints Using SQL Patch (Nigel Bayliss)
Using SQL Patch to add hints to a packaged application (Unknown)

Adaptive Cursors and SQL Plan Management (Arup Nanda)
How a Cursor becomes Bind Aware? (Carlo Sierra)
Adaptive Cursor Sharing with SQL Plan Baselines (Dominic Brooks)

SPM, Part 01: SQL Patch, SQL Profile and Baseline


This post is the first part of two.
It is a copy and paste from several links that you can find in the "Reference" section.


SQL Plan Management

SPM maintains, on disk, a plan history consisting of different execution plans generated for each managed SQL statement. An enhanced version of the Oracle optimizer, called SPM aware optimizer, accesses, uses, and manages this information which is stored in a repository called the SQL Management Base (SMB).

The plan history enables the SPM aware optimizer to determine whether the best-cost plan it has produced using the cost-based method is a brand new plan or not. A brand new plan represents a plan change that has potential to cause performance regression. For this reason, the SPM aware optimizer does not choose a brand new best-cost plan. Instead, it chooses from a set of accepted plans. An accepted plan is one that has been either verified to not cause performance regression or designated to have good performance. A set of accepted plans is called a SQL plan baseline, which represents a subset of the plan history.

After you create a SQL plan baseline for a statement, subsequent executions of that statement will use the SQL plan baseline. From all the plans in the SQL plan baseline, the optimizer will select the one with the best cost in the current environment (including bind values, current statistics, parameters, etc.). The optimizer will also generate the best-cost plan that it would otherwise have used without a SQL plan baseline. However, this best-cost plan will not be used but instead added to the statement's plan history for later verification. In other words, the optimizer will use a known plan from the SQL plan baseline instead of a new and hitherto unknown plan.
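
For example, here is a hedged sketch of creating a SQL plan baseline manually from a cursor already in the shared pool, using the documented DBMS_SPM API (the sql_id is a placeholder):

set serveroutput on
DECLARE
  n PLS_INTEGER;
BEGIN
  -- load the plans of the given cursor into a SQL plan baseline
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '&sqlid');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || n);
END;
/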

It is possible for a SQL statement to have a stored outline as well as a SQL plan baseline. If a stored outline exists for a SQL statement and is enabled for use, then the optimizer will use it, ignoring the SQL plan baseline. In other words, the stored outline trumps a SQL plan baseline.

Adaptive cursor sharing (ACS) may generate multiple cursors for a given bind sensitive SQL statement if it is determined that a single plan is not optimal under all conditions. Each cursor is generated by forcing a hard parse of the statement. The optimizer will normally select the plan with the best cost upon each hard parse.

When you have a SQL plan baseline for a statement, the SPM aware optimizer will select the best accepted plan as the optimal plan. This also applies for the hard parse of a bind sensitive statement. There may be multiple accepted plans, each of which is optimal for different bind sets. With SPM and ACS enabled, the SPM aware optimizer will select the best plan for the current bind set.
Thus, if a hard parse occurs, the normal SPM plan selection algorithm is used regardless of whether a statement is bind sensitive.

Remember that they are responsible for two different tasks. ACS controls whether or not a child cursor is shared on a particular execution.  For each execution of the query, ACS considers the current bind values and decides if an existing child cursor can be shared or if the optimizer should be given the chance to find a better plan for the current bind values.  SPM controls which plans the optimizer may choose. If a child cursor is bind-aware, the decision to share or not is made irrespective of whether the query is controlled by SPM. But once the query and its current bind values are sent to the optimizer for optimization, SPM constrains the optimizer's choice of plans, without regard to whether this query is being optimized due to ACS.


My Note

A SQL statement with a bind variable runs -> parsing is done, peeking at the bind content -> ACS is active, so an execution plan is chosen -> the baseline is working, so the plan is compared with the ones inside the SQL plan baseline -> if the plan exists and is accepted, it is selected; otherwise it is added to the plan history and an accepted plan from the baseline runs instead (so the freshly optimized plan is not the one executed).


Querying DBA_SQL_PLAN_BASELINES (for example):

SQL> select sql_text, plan_name, enabled, accepted from dba_sql_plan_baselines;

SQL_TEXT                                 PLAN_NAME                      ENA ACC
---------------------------------------- ------------------------------ --- ---
select p.prod_name, s.amount_sold, t.cal SYS_SQL_PLAN_fcc170b08cbcb825  YES NO
select p.prod_name, s.amount_sold, t.cal SYS_SQL_PLAN_fcc170b0a62d0f4d  YES YES

The 'NO' value for the accepted column implies that the new plan is in the plan history but is not available for use until it is verified to be a good plan. The optimizer will continue to use an accepted plan until new plans are verified and added to the SQL plan baseline. If there is more than one plan in the SQL plan baseline, the optimizer will use the one with the best cost under the then-current conditions (statistics, bind values, parameter settings and so on).
When you create a SQL plan baseline for a SQL statement, the SPM aware optimizer thus guarantees that no new plans will be used other than the ones in the SQL plan baseline. This prevents unexpected plan changes that sometimes lead to performance regressions.
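
The verification step can also be triggered manually. A hedged sketch using the documented DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE function (the sql_handle is a placeholder, taken from DBA_SQL_PLAN_BASELINES):

set long 1000000
set serveroutput on
DECLARE
  report CLOB;
BEGIN
  -- verify the non-accepted plans and accept those that perform better
  report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
              sql_handle => '&sql_handle',
              verify     => 'YES',
              commit     => 'YES');
  DBMS_OUTPUT.PUT_LINE(report);
END;
/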

A SQL statement can have both a SQL profile and a SQL plan baseline. In this case, the SPM aware optimizer will use both the SQL profile and the SQL plan baseline. The SQL profile contains additional information that helps the optimizer to accurately cost each accepted plan and select the best one. The SPM aware optimizer may choose a different accepted plan when a SQL profile is present than when it is not.

When we accepted the recommended SQL profile, the SQL Tuning Advisor created a SQL profile and also changed the non-accepted plan to accepted status, thus evolving the SQL plan baseline to two plans. Note that the SQL Tuning Advisor may also find a completely new tuned plan, one that is not in the plan history. If you then accept the recommended SQL profile, the SQL Tuning Advisor will create a SQL profile and also add the tuned plan to the SQL plan baseline.

A SQL statement can also have both a SQL Patch and a SQL plan baseline.

SQL Baselines exist to reproduce a specific plan. In fact, a plan hash exists as part of the baseline. If, on application of the baseline, the Optimizer is unable to reproduce the desired plan, the baseline is rejected outright.

On the other hand, SQL Patches have been developed primarily to get the Optimizer to avoid a particular problem path in an execution plan, specifically to avoid failures due to certain access methods, join methods, etc.

Two parameters allow you to control SPM. The first, optimizer_capture_sql_plan_baselines, which is FALSE by default, allows you to automatically capture plans. The second parameter, optimizer_use_sql_plan_baselines, is TRUE by default. It allows the SPM aware optimizer to use the SQL plan baseline if available when compiling a SQL statement. If you set this parameter to FALSE, the SPM aware optimizer will be disabled and you will get the regular cost-based optimizer which will select the best plan based on estimated cost.
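
For example (a sketch; both parameters can be set at session or system level):

-- check the current settings
show parameter optimizer_capture_sql_plan_baselines
show parameter optimizer_use_sql_plan_baselines

-- capture plans automatically and keep using the baselines
alter session set optimizer_capture_sql_plan_baselines = true;
alter session set optimizer_use_sql_plan_baselines = true;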

My Note

A SQL Patch and/or a SQL Profile can exist together with a SQL plan baseline. They are different objects used for different things.

SQL runs -> a SQL Patch and/or Profile exists, so it is applied -> a new plan is generated -> the baseline is working, so the plan is compared with the ones inside the SQL plan baseline -> if the plan exists and is accepted, it is selected; otherwise it is added to the plan history and an accepted plan from the baseline runs instead (so the freshly optimized plan is not the one executed).

Please note that a SQL Patch and a SQL Profile can exist together. My guess is that the SQL Patch is applied before the SQL Profile.

So the order should be something like this:

SQL runs -> a SQL Patch exists, so it is applied -> a Profile exists, so it is applied -> a new plan is generated -> the baseline is working, so the plan is compared with the ones inside the SQL plan baseline -> if the plan exists and is accepted, it is selected; otherwise it is added to the plan history and an accepted plan from the baseline runs instead (so the freshly optimized plan is not the one executed).
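
A hedged sketch to check which of these objects were actually applied to a cursor (in 11g, V$SQL exposes the SQL_PATCH, SQL_PROFILE and SQL_PLAN_BASELINE columns; &sqlid is a placeholder):

select sql_id, child_number, sql_patch, sql_profile, sql_plan_baseline
from v$sql
where sql_id = '&sqlid';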


[Figure: the plan-selection flow described above]



Reference

SQL Plan Management (Part 1 of 4) Creating SQL plan baselines (Maria Colgan)
SQL Plan Management (Part 2 of 4) Creating SQL plan baselines (Maria Colgan)
SQL Plan Management (Part 3 of 4) Creating SQL plan baselines (Maria Colgan)
SQL Plan Management (Part 4 of 4) Creating SQL plan baselines (Maria Colgan)
How do adaptive cursor sharing and SQL Plan Management interact? (Unknown)
Why are there more cursors in 11g for my query containing bind variables? (Unknown)
Oracle Optimizer and SPM plan interaction (Mohamed Houri)
Stage and Fix SQL Statement (Unknown)

dbms_sqldiag (Jonathan Lewis)

SQL Patch I (Dominic Brooks)
SQL Patch II (Dominic Brooks)
SQL Patch III (Dominic Brooks)
SQL Patch IV (Dominic Brooks)
Adding and Disabling Hints Using SQL Patch (Nigel Bayliss)
Using SQL Patch to add hints to a packaged application (Unknown)

Adaptive Cursors and SQL Plan Management (Arup Nanda)
How a Cursor becomes Bind Aware? (Carlo Sierra)
Adaptive Cursor Sharing with SQL Plan Baselines (Dominic Brooks)



Tuesday, June 20, 2017

On DB Time, DB CPU and Utilization

DB Time/DB CPU


The "Load Section" of an AWR Report has two interesting lines
  • DB Time(s)
  • DB CPU(s)

Load Profile

This "Load Profile" is an AWR snapshot of 1h (60m/3600s). Look at the definitions of those metrics.

  • DB Time is a time model statistic that is the sum of Oracle process CPU consumption and non-idle wait time
  • DB CPU is Oracle server/foreground/shadow process CPU consumption  

(the above definitions are taken from an OraPub blog post). Both of them are divided by the "Elapsed Time", so they are averages over the snapshot interval.

Because of the definition of "DB Time", the "DB Time(s)" value represents the Average Active Sessions: the amount of database time accumulated in one second.

"DB CPU(s)" tell us how many CPU are used in one second. You can compare this value with the number of CPU on your system, in order to understand if you are CPU bound.

In my example (from a production system), the database is doing... nothing: one second of active session time every two and a half seconds (0.4 in 1s + 0.4 in 1s + 0.2 in 0.5s).

Also, in each second only 0.3 of the system's 120 CPUs are used (see next picture).


Utilization

As you can see from this OraPub post, the utilization is calculated as
  • Utilization = Requirement / Capacity (a)
(a) capacity: the maximum amount that can be contained or accommodated

From our point of view

  • The requirement is the time used 
  • The capacity is the time available

so the previous formula becomes

  • Utilization = time used / time available

You can calculate the utilization using two different methods. These methods are taken from Craig Shallahamer's Video Lessons.
  1. Core method
  2. Busy:Idle method

In order to explain these methods, I post another picture from the AWR report.

Operating System Statistics

Core method

With this method, requirement and capacity are:

Requirement =>  (v$osstat.)BUSY_TIME
Capacity => #CPU * Elapsed (Time)

In the AWR report, BUSY_TIME comes from V$OSSTAT, "Elapsed" is the "Elapsed Time" of the snapshot, and #CPU is the number of CPUs on the system (the NUM_CPUS statistic of V$OSSTAT).

Remember to normalize the units:

Utilization = BUSY_TIME / Elapsed = (3336707/100) / (120*60*60)
Utilization = 33367.07 / 432000
Utilization = 0.077
Utilization = 7.7%

Because BUSY_TIME is in hundredths of a second, I divided it by 100. Also, since the "Elapsed Time" in the AWR report is in minutes, I multiplied it by 60.


Busy:Idle method

With this method, requirement and capacity are:

Requirement =>  (v$osstat.)BUSY_TIME
Capacity => (v$osstat.)BUSY_TIME + (v$osstat.)IDLE_TIME

In this case, all information comes from the "Operating System Statistics" section of AWR Report.

Utilization = BUSY_TIME / (BUSY_TIME + IDLE_TIME)
Utilization = 3336707 / (3336707 + 39244501)
Utilization = 3336707 / 42581208
Utilization = 0.078
Utilization = 7.8%

In this case, all measurements are in hundredths of a second, so no further conversions are needed.
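
A hedged sketch of the same calculation taken straight from V$OSSTAT (these counters are cumulative since instance startup, while AWR uses the delta between two snapshots):

select busy.value / (busy.value + idle.value) as utilization
from (select value from v$osstat where stat_name = 'BUSY_TIME') busy,
     (select value from v$osstat where stat_name = 'IDLE_TIME') idle;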

The low utilization confirms the very low Average Active Sessions shown in the first part of this post.

Reference

* http://blog.orapub.com/20130228/how-many-cpu-cores-do-i-really-have-in-my-oracle-database-server.html
* http://www.toadworld.com/platforms/oracle/b/weblog/archive/2014/05/26/understanding-cpu-utilization-amp-cpu-capacity
* http://blog.orapub.com/20140805/what-is-oracle-db-time-db-cpu-wall-time-and-non-idle-wait-time.html
* https://docs.oracle.com/database/122/REFRN/V-OSSTAT.htm#REFRN30321

Monday, June 12, 2017

SQL Profile: Case 3 - Part 1: Create/Execute/Report/Accept

Case 3: The statement is in the AWR repository

In this scenario there are the following steps:

(a) Find out the SQL_ID from DBA_HIST_SQLTEXT
(b) Check the execution plan
(c) Find out the snapshot interval from DBA_HIST_SQLSTAT
(d) Create a task using the SQL_ID found in the previous step
(e) Run the task
(f) Show the recommendations
(g) Accept the recommendations

These steps are similar to the previous two cases (Case 1 and Case 2). The difference is that you have to look for the sql_id in the DBA_HIST* views.


Step a) Find out the SQL_ID from the DBA_HIST_SQLTEXT view

set long 999999
set pages 99
SELECT 
      sql_id 
    , sql_text 
FROM 
      dba_hist_sqltext 
WHERE 
      regexp_like (SQL_TEXT, 'af_t_obj_spec','i') 
AND   regexp_like (SQL_TEXT, 'AF_T_OBJ t3','i');


SQL_ID
-------------
SQL_TEXT
--------------------------------------------------------------------------------
57fbbndr8f1r0
SELECT  AF_T_OBJ_SPEC.OBJ_SPEC_ID, AF_T_OBJ_SPEC.SHORT_NAME, AF_T_OBJ_SPEC.DESCR
IPTION, AF_T_OBJ_SPEC.DELETED, AF_T_OBJ_SPEC.LAST_VERSION, AF_T_OBJ_SPEC.OBJ_TYP
E_NAME, AF_T_OBJ_SPEC.LOCK_USER, AF_T_OBJ_SPEC.LOCK_LEASE_DATE, AF_T_OBJ_SPEC.LO
CK_SESSION, AF_T_OBJ_SPEC.LOCK_INSTANCE, AF_T_OBJ_SPEC.LIFECYCLE_UUID, AF_T_OBJ_
SPEC.INHERIT_PARENT_PRIVS, AF_T_OBJ_SPEC.OBJ_SPEC_UUID, AF_T_OBJ_SPEC.CREATED_BY
, AF_T_OBJ_SPEC.CREATED_DATE, AF_T_OBJ_SPEC.MODIFIED_BY, AF_T_OBJ_SPEC.MODIFIED_
DATE, AF_T_OBJ_SPEC.OWNER, AF_T_OBJ_SPEC.FEATURE_NAME, AF_T_OBJ_SPEC.CHECKSUM, A
F_T_OBJ_SPEC.INACTIVE, AF_T_OBJ_SPEC.SYSTEM_MANAGED, AF_T_OBJ_SPEC.FINAL_VERSION
, AF_T_OBJ_SPEC.SOURCE_ID, AF_T_OBJ.OBJ_ID, AF_T_OBJ.CHANGENUMBER, AF_T_OBJ.VERS
ION, AF_T_OBJ.INITIAL_COMMENT, AF_T_OBJ.STATUS_UUID, AF_T_OBJ.OBJ_UUID, AF_T_OBJ
.OBJ_SPEC_UUID AF_T_OBJ_OBJ_SPEC_UUID, AF_T_OBJ.CREATED_BY AF_T_OBJ_CREATED_BY,
AF_T_OBJ.CREATED_DATE AF_T_OBJ_CREATED_DATE, AF_T_OBJ.MODIFIED_BY AF_T_OBJ_MODIF
IED_BY, AF_T_OBJ.MODIFIED_DATE AF_T_OBJ_MODIFIED_DATE, AF_T_OBJ.OBJ_DEFINITION ,
 rownum FROM AF_T_OBJ_SPEC, AF_T_OBJ WHERE AF_T_OBJ_SPEC.OBJ_SPEC_UUID =AF_T_OBJ
.OBJ_SPEC_UUID  AND   AF_T_OBJ_SPEC.DELETED = 0   AND  AF_T_OBJ.VERSION = (SELEC
T MAX(VERSION) FROM AF_T_OBJ t3 where AF_T_OBJ.OBJ_SPEC_UUID = T3.OBJ_SPEC_UUID)
  AND  EXISTS (SELECT 0 FROM AF_L_LIFECYCLE_STATUS WHERE AF_L_LIFECYCLE_STATUS.S
TATUS_UUID = AF_T_OBJ.STATUS_UUID AND AF_L_LIFECYCLE_STATUS.LIFECYCLE_UUID = AF_
T_OBJ_SPEC.LIFECYCLE_UUID AND AF_L_LIFECYCLE_STATUS.STATUS_NAME  in (:1 ,:2 ))
AND  AF_T_OBJ.VERSION = (SELECT MAX(VERSION) FROM AF_T_OBJ t3 where AF_T_OBJ.OBJ
_SPEC_UUID = T3.OBJ_SPEC_UUID)  AND  AF_T_OBJ_SPEC.OBJ_TYPE_NAME in (:3 ) ORDER
BY  AF_T_OBJ_SPEC.DESCRIPTION



Step b) Check the execution plan

SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_AWR('&sqlid'));

Enter value for sqlid: 57fbbndr8f1r0
old   1: SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_AWR('&sqlid'))
new   1: SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_AWR('57fbbndr8f1r0'))


You can find the output here (the database I am using is a two-node RAC and I ran the commands on the first instance (*)). For this sql_id there are 4 execution plans (*):

Plan hash value: 333756168
Plan hash value: 934906073
Plan hash value: 2303425470
Plan hash value: 2773806998


If you want, at this point, you could choose one of the previous plan hash values and set it for the SQL Profile.
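
For example, DBMS_XPLAN.DISPLAY_AWR accepts the plan hash value as its second parameter, so you can inspect a single plan before choosing it (a sketch using one of the values above):

SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_AWR('57fbbndr8f1r0', 333756168));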

But what I want is for the SQL Tuning Advisor to analyze the statement for me, so I go to "Step c".


Step c) Find out a snapshot interval from DBA_HIST_SQLSTAT

set lines 210 
set pages 99 
SELECT min(snap_id), max(snap_id) FROM DBA_HIST_SQLSTAT WHERE sql_id ='&sqlid';

Enter value for sqlid: 57fbbndr8f1r0
old   1: SELECT min(snap_id), max(snap_id) FROM DBA_HIST_SQLSTAT WHERE sql_id ='&sqlid'
new   1: SELECT min(snap_id), max(snap_id) FROM DBA_HIST_SQLSTAT WHERE sql_id ='57fbbndr8f1r0'
MIN(SNAP_ID) MAX(SNAP_ID)
------------ ------------
        4944         5118

The details for this sql_id in DBA_HIST_SQLSTAT are:

select 
      snap_id
    , sql_id
    , plan_hash_value 
from 
      dba_hist_sqlstat 
where 
      sql_id='57fbbndr8f1r0' 
order by 
      snap_id;

   SNAP_ID SQL_ID        PLAN_HASH_VALUE
---------- ------------- ---------------
      4944 57fbbndr8f1r0      2303425470
      4944 57fbbndr8f1r0       333756168
      4944 57fbbndr8f1r0      2773806998
      5117 57fbbndr8f1r0      2303425470
      5117 57fbbndr8f1r0       333756168
      5117 57fbbndr8f1r0      2773806998
      5118 57fbbndr8f1r0       333756168
      5118 57fbbndr8f1r0       934906073  <<< see (*) <<<
      5118 57fbbndr8f1r0      2773806998



Step d) Create the task, using the SQL_ID 


With these snap_ids, I can create a task and get recommendations:

DECLARE
   task_name VARCHAR2(30);
BEGIN
    task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
      begin_snap=> &s_snapid
    , end_snap=> &e_snapid
    , sql_id => '&sqlid'
    , scope => 'COMPREHENSIVE'
    , time_limit => 90 -- nr of seconds of analysis 
    , task_name => 'task_&sqlid'
    , description => 'Task to tune a query on a specified table');
END;
/

Enter value for s_snapid: 4944
old   5:       begin_snap=> &s_snapid
new   5:       begin_snap=> 4944
Enter value for e_snapid: 5118
old   6:     , end_snap=> &e_snapid
new   6:     , end_snap=> 5118+1
Enter value for sqlid: 57fbbndr8f1r0
old   7:     , sql_id => '&sqlid'
new   7:     , sql_id => '57fbbndr8f1r0'
old  10:     , task_name => 'task_&sqlid'
new  10:     , task_name => 'task_57fbbndr8f1r0'


In this case, I chose to analyze the whole range, but I could choose just a part of it.


Step e) Run the task in order to analyze the statement

BEGIN
    DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name =>'task_57fbbndr8f1r0');
END;
/


Step f) Show the recommendations

set long 40000
set longchunksize 1000
set linesize 210
set heading off
set pages 0
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('task_57fbbndr8f1r0') from DUAL;
set heading on
set pages 99


The output is here. The Advisor found 3 improvements:

1- SQL Profile Finding
2- Index Finding
3- Alternative Plan Finding (*)

Also, you could query the data dictionary to get information:

SELECT
    finding_id
  , message
FROM
    dba_advisor_findings 
WHERE
    task_name='task_57fbbndr8f1r0';

FINDING_ID MESSAGE
---------- -----------------------------------------------------------------------------------------------
         1 A potentially better execution plan was found for this statement.
         2 The execution plan of this statement can be improved by creating one or more indices.
         3 Some alternative execution plans for this statement were found by searching the system's 
           real-time and historical performance data.


For the 3rd point, the "Alternative Plan Finding", see the notes below.

Step g) Accept the recommendations 

If you want to accept the profile, then you have to run (see the log):

execute dbms_sqltune.accept_sql_profile( - 
    task_name =>'task_57fbbndr8f1r0' -
  , task_owner => 'SYS' -
  , replace => TRUE);
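
After accepting it, you can verify that the profile exists and is enabled; this also gives you the name needed for the optional drop in Step i (a sketch on the standard DBA_SQL_PROFILES view):

select name, created, status, force_matching
from dba_sql_profiles;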


Step h) Optionally, you can drop the TASK

exec DBMS_SQLTUNE.DROP_TUNING_TASK ('&task_name')


Step i) Optionally, you can drop the SQL Profile


exec DBMS_SQLTUNE.DROP_SQL_PROFILE ('&sqlprofile_name')


(*) Notes

From the AWR repository (DBA_HIST_SQLSTAT), you can see that there are 4 plan hash values for the 57fbbndr8f1r0 statement. They are:

333756168
934906073
2303425470
2773806998

Anyway, the SQL Tuning Advisor, in its analysis, doesn't consider plan hash value 934906073. In the log, in the "Alternative Plan Finding" section, in fact, you see:

3- Alternative Plan Finding
---------------------------
  Some alternative execution plans for this statement were found by searching the system's real-time and historical performance data.

  The following table lists these plans ranked by their average elapsed time.
  See section "ALTERNATIVE PLANS SECTION" for detailed information on each plan.

  id plan hash  last seen            elapsed (s)  origin          note
  -- ---------- -------------------- ------------ --------------- ----------------
   1 2303425470  2017-05-09/16:41:30        0.096 Cursor Cache
   2  333756168  2017-05-09/16:41:30        0.122 Cursor Cache
   3 2773806998  2017-05-09/16:41:29        0.132 Cursor Cache

All plans are taken from memory (Cursor Cache). This is why plan 934906073 is not found: it is in the second instance:

select 
      inst_id
    , plan_hash_value
from 
      gv$sql 
where 
      sql_id='57fbbndr8f1r0' 
order by inst_id;

   INST_ID PLAN_HASH_VALUE
---------- ---------------
         1       333756168
         1      2303425470
         1      2773806998
         2       333756168
         2       934906073 <<< instance nr 2
         2      2773806998

If you want to find suggestions about plan hash value 934906073, you have to run DBMS_SQLTUNE.EXECUTE_TUNING_TASK (Step e) on the second instance. Here is the log. This is the section about the "Alternative Plan Finding":

  id plan hash  last seen            elapsed (s)  origin          note
  -- ---------- -------------------- ------------ --------------- ----------------
   1 2773806998  2017-05-09/17:06:12        0.064 Cursor Cache
   2  934906073  2017-05-09/17:06:12        0.120 Cursor Cache <<< here
   3  333756168  2017-05-09/17:01:38        0.123 Cursor Cache

Well, this is how the "SQL Tuning Advisor" works. I opened an SR for this and the support told me:

If plans are cached in memory the STA Tuning Task will take the info from cursor cache for performance.

So, the behavior of the SQL Tuning Advisor, when used against the AWR repository, is:

* Providing SQL Profile suggestion: "Finding Section" (1st finding)
* Providing suggestion with index: "Index Finding" (2nd finding)
* Providing alternative solutions: "Alternative Plan Finding" (3rd finding)

About the last point (the 3rd), the alternative solutions are taken from memory.

Monday, May 8, 2017

Recovering Table, Part 4 - Note

There are two additional pieces of information I want to give you:
  1. The directories created in order to restore the auxiliary database
  2. The configuration of the RMAN script

The script I used to recover the table is:

run {
2> recover table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE
3> until time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')"
4> auxiliary destination '/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp'
5> remap table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE:PIANIFICAZIONE_MAN_BCK;
6> }


  • The directories that RMAN uses for the "core" files
Inside the directory specified by the "auxiliary destination" parameter, RMAN creates a directory with the same name as the production database; in my case it was PCMSP. The "core" files will be restored here.

Under this directory, there will be 3 directories:



  • one for controlfile (/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/controlfile/)
  • one for datafile (/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/)
  • one for the online redo logs (/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/)
The archived logs will be restored under the PCMSP directory (at the same level as the controlfile/datafile/onlinelog directories).

The list of "core" datafile. The archived logs are not shown, here.

  • The directories that RMAN uses for the "application" files
At the same level as the directory where RMAN restores the core files (auxiliary destination=/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp), a new directory is created (DSEK_PITR_PCMSP in my case) where the "application" datafiles will go.

core files => /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/
application files => /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DSEK_PITR_PCMSP/

where

  • DSEK is a prefix generated by RMAN
  • PITR stands for Point In Time Recovery
  • PCMSP is the name of the database

The "core" directory (PCMS) and "application" directory (DESK_PITR_PCMSP), are created at the same level. Also the archived log will be restored to the same level. Here are shown the archived log needed for the recovery of "application" tablespace
Summarizing what I said, the tree of directy is:

<auxiliary destination>
     |
     -> archived log for "core" part
     -> archived log for "application" part
     -<db_name>
            |
            -> controlfile dir
                  |
                  -> controlfile
            -> onlinelog dir
                  |
                  -> redolog file online
            -> datafile dir
                  |
                  -> SYSTEM tbs
                  -> SYSAUX tbs
                  -> TEMP tbs
                  -> all UNDO tbs
     -<pitr_db_name>
            |
            -> datafile for "application" part


  • Channel configuration
Another important thing to show is the script I used. As you can see, I'm not allocating a channel. In my case, the backup is on tapes. If you try to allocate a channel like this (I use Legato Networker; this is why all the env parameters start with "NSR_"):

run {
2> ALLOCATE CHANNEL c1 TYPE 'SBT_TAPE'
PARMS 'ENV=(NSR_SERVER=bck_server, NSR_CLIENT=bck_client, NSR_GROUP=db_group, NSR_DATA_VOLUME_POOL=db_pool_1M)'; 
3>
4> recover table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE
5> until time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')"
6> auxiliary destination '/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp'
7> remap table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE:PIANIFICAZIONE_MAN_BCK;
8> }


then you get the following error:

RMAN-06034: at least 1 channel must be allocated to execute this command

The cause and solution are explained in this MOS note: "Recover Table Fails with RMAN-06034: at least 1 channel must be allocated to execute this command (Doc ID 2105325.1)":


You cannot manually allocate a channel when RMAN implicitly creates the auxiliary instance; the channel specification must be configured in the target database. This is done with:

configure channel device type sbt_tape FORMAT '%U' PARMS='ENV=(NB_ORA_CLIENT=......, NB_ORA_POLICY=............., NB_ORA_SCHED=.....)';


So, I have to connect to the target and to the catalog/controlfile and run the "configure channel" command:

$rman
RMAN> connect target /
RMAN> configure channel device type sbt_tape FORMAT '%U' PARMS='ENV=(NSR_CLIENT=......, NSR_SERVER=.....)';

Using Legato Networker, the only two parameters that you have to set for the restore are NSR_CLIENT and NSR_SERVER. The tape number is saved in the catalog/controlfile.
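
To double-check what is configured before running the recovery, a quick sketch using the standard RMAN SHOW command:

RMAN> SHOW CHANNEL;
RMAN> SHOW DEVICE TYPE;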


  • Other recovery scenarios

As mentioned in Part 1, there are several table recovery scenarios. I report two examples from the manual pages.

The first one: the recovery generates just a dump file; the import is not performed.

RECOVER TABLE SCOTT.EMP, SCOTT.DEPT
    UNTIL TIME 'SYSDATE-1'
    AUXILIARY DESTINATION '/tmp/oracle/recover'
    DATAPUMP DESTINATION '/tmp/recover/dumpfiles'
    DUMP FILE 'emp_dept_exp_dump.dat'
    NOTABLEIMPORT;

The second one: for the PITR, the recovery uses UNTIL SEQUENCE and also remaps the tablespace.

RECOVER TABLE SH.SALES:SALES_1998, SH.SALES:SALES_1999
    UNTIL SEQUENCE 354
    AUXILIARY DESTINATION '/tmp/oracle/recover'
    REMAP TABLE 'SH'.'SALES':'SALES_1998':'HISTORIC_SALES_1998',
              'SH'.'SALES':'SALES_1999':'HISTORIC_SALES_1999' 
    REMAP TABLESPACE 'SALES_TS':'SALES_PRE_2000_TS';



Recovering Table, Part 1 - Introduction
Recovering Table, Part 2 - The "Core" part
Recovering Table, Part 3 - The "Application" part

Recovering Table, Part 3 - The "Application" part

The script I used to recover the table is:

run {
2> recover table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE
3> until time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')"
4> auxiliary destination '/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp'
5> remap table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE:PIANIFICAZIONE_MAN_BCK;
6> }

The table recovery phase is split into 2 logical steps:

  1. Restore the core part
  2. Restore the application part


  • Restore the "Application" part

During this step, RMAN will restore:

  • The tablespaces where the table is located
  • The archived logs
Transparently, RMAN runs the following scripts:

##########################
Restore the "application" tablespace
##########################

In my case, there is just one tablespace:

{
# set requested point in time
set until  time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')";
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile  13 to new;
set newname for datafile  14 to new;
set newname for datafile  17 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  13, 14, 17;

switch clone datafile all;
}

where

00013 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_%u_.dbf
00014 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_%u_.dbf
00017 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_%u_.dbf


###########################
Recover the "application" tablespace
###########################

{
# set requested point in time
set until  time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  13 online";
sql clone "alter database datafile  14 online";
sql clone "alter database datafile  17 online";
# recover and open resetlogs
recover clone database tablespace  "PCMS_DATA", "SYSTEM", "UNDOTBS1", "UNDOTBS2", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}

As you can see in the log file, in this case too it is necessary to restore the archived logs.

##########################
Creation of expdp/impdp directory
##########################

{
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp''";
}

#############
expdp of the table
#############

After the database recovery, the table export starts:

Performing export of tables...
   EXPDP> Starting "SYS"."TSPITR_EXP_DgAf_BngC":  
   EXPDP> Estimate in progress using BLOCKS method...
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   EXPDP> Total estimation using BLOCKS method: 7 MB
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   EXPDP> . . exported "PCMS_ADM"."PIANIFICAZIONE_MANUTENZIONE"    5.110 MB   34394 rows
   EXPDP> Master table "SYS"."TSPITR_EXP_DgAf_BngC" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_DgAf_BngC is:
   EXPDP>   /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/tspitr_DgAf_81953.dmp
   EXPDP> Job "SYS"."TSPITR_EXP_DgAf_BngC" successfully completed at Thu May 4 15:51:03 2017 elapsed 0 00:00:29
Export completed


{
# shutdown clone before import
shutdown clone abort
}

###########
impdp of table
###########

...and the table import:

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_DgAf_gvec" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_DgAf_gvec":  
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "PCMS_ADM"."PIANIFICAZIONE_MAN_BCK"         5.110 MB   34394 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_DgAf_gvec" successfully completed at Thu May 4 15:51:14 2017 elapsed 0 00:00:04
Import completed

As you can see, once the expdp command completes, the auxiliary database is not necessary anymore, so it is shut down.


#########
The last step
#########

After importing the table, all restored files ("core" files, "application" files and archived logs) will be deleted.

auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_temp_djpckr57_.tmp deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_temp_djpckr70_.tmp deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_8_djpddy16_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_6_djpddo9z_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_4_djpdddof_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_2_djpdd443_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_7_djpddsr2_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_5_djpddk23_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_3_djpdd893_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_1_djpdczsp_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_djpcndlt_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_djpd013l_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_djpcop6k_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_sysaux_djpcc7lb_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_sysaux_djpc4pyw_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs2_djpcbo4t_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs2_djpc9bnh_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs1_djpc8t8l_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs1_djpc7rts_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_system_djpc79bp_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_system_djpc65fy_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/controlfile/o1_mf_djpc35l1_.ctl deleted
auxiliary instance file tspitr_DgAf_81953.dmp deleted

Recovering Table, Part 1 - Introduction
Recovering Table, Part 2 - The "Core" part
Recovering Table, Part 4 - Note