
Monday 16 July 2012

TKPROF - Performance Tuning (Examples 1, 2, 3) - 4

Hi All, quite often we come across tuning queries. This is an area most people find difficult to understand. The blind rule everyone quotes is: check the explain plan and try to remove the full table scans. Most of the time that works to an extent.
To really tune a statement, you need to understand what Oracle is doing to fetch the data you require or to perform the transactions you have issued.
For that purpose Oracle provides the SQL Trace utility, which helps you figure out which operations are performed, how they are performed, and the time taken by each of them.
First and foremost we will see what setup is required (mostly done by the DBA, but nice to know).
Let's try to answer some simple questions.
1. At what level can we enable the trace?
Both at the system (instance) level and at the session level.
2. How to find the sessions and enable the trace?
select username, sid, serial#, paddr from v$session where username='APPS';

USERNAME   SID   SERIAL#   PADDR
---------- ----- --------- --------
APPS       372   313       4308F6F8
APPS       373   27        43092A70
APPS       375   36        430924B8
APPS       376   184       4308DA60
APPS       377   334       4308FCB0
APPS       378   102       4308A6E8
APPS       380   24        43091948
APPS       381   12823     4308F140
APPS       382   5297      4308EB88
To enable the trace for any one of these sessions:
EXECUTE DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(<sid>, <serial#>, TRUE);
EXECUTE DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(382, 5297, TRUE);
3. Where are the trace files stored?
You will find the trace files under admin/<SID>/udump; otherwise they are stored in the directory specified by the user_dump_dest parameter in the init.ora file.
Raw trace files are difficult to read, so we use the tkprof utility to format them:
tkprof tracefile.trc t1 RECORD=sqlfile.sql

4. What does the SQL Trace utility provide?
The diagnostic tool SQL Trace provides performance information about individual SQL statements and generates the following statistics for each statement:
* parse, execute, and fetch counts
* CPU and elapsed times
* physical reads and logical reads
* number of rows processed
* misses on the library cache
This information is written to a trace (.trc) file, and SQL Trace can be enabled/disabled for a session or for the instance.
5. What setup is needed to enable trace for an instance?
The following parameters need to be set in the init.ora file for the particular instance (SID) you wish to trace with SQL Trace:
* SQL_TRACE: enables/disables SQL Trace for the instance.
Values: TRUE enables statistics collection for all sessions; FALSE disables it.
* TIMED_STATISTICS: enables/disables the collection of timed statistics, such as CPU and elapsed times.
Values: TRUE enables timing (usually recommended); FALSE is the default.
* MAX_DUMP_FILE_SIZE: specifies the maximum size of trace files, in operating system blocks.
Values: the default is 500; if your trace file is truncated, increase this value.
* USER_DUMP_DEST: specifies the destination directory for the trace file.
Values: the default is the default destination for system dumps on your operating system.
6. How to enable the trace for a session?
ALTER SESSION SET SQL_TRACE = TRUE;
ALTER SESSION SET SQL_TRACE = FALSE;
For timed statistics:
ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
ALTER SYSTEM SET TIMED_STATISTICS = FALSE;

7. How to set the SQL Trace facility for an instance?
If the initialization parameter SQL_TRACE = TRUE, then statistics will be gathered for ALL sessions. If the facility has been enabled for the instance, it may be disabled for an individual session by issuing the ALTER SESSION statement shown above.
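Putting the pieces above together, a minimal end-to-end tracing session might look like this; the table name and trace file name are purely illustrative:

```sql
-- Make sure timing information will be recorded
ALTER SESSION SET TIMED_STATISTICS = TRUE;

-- Switch tracing on for this session
ALTER SESSION SET SQL_TRACE = TRUE;

-- Run the statement(s) you want to analyse (hypothetical table)
SELECT COUNT(*) FROM emp WHERE deptno = 10;

-- Switch tracing off again
ALTER SESSION SET SQL_TRACE = FALSE;

-- Then, from the OS prompt, format the raw trace file with tkprof, e.g.:
-- tkprof <sid>_ora_<pid>.trc report.prf
```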

================

Hi All,
Now we will see how to read the trace files. Raw trace files are very difficult to understand, so we use the TKPROF utility to put them into a more readable format.
This is one of the documents I had, collected from Metalink; a good one with clear examples.
If you have a system that is performing badly, a good way to identify problem SQL statements is to trace a typical user session and then use TkProf to format the output, using the sort functions on the tkprof command line.
There are a huge number of sort options, which can be listed by simply typing 'tkprof' at the command prompt.
A useful starting point is the 'fchela' sort option, which orders the output by elapsed time spent fetching (remember that timing information is only available with TIMED_STATISTICS set to TRUE in the init.ora file).
The resulting .prf file will contain the most time-consuming SQL statement at the start of the file.

Another useful parameter is SYS.
It can be used to prevent SQL statements run as user SYS from being displayed, which can make the output file much shorter and easier to manage.
Remember to always set the TIMED_STATISTICS parameter to TRUE when tracing sessions, as otherwise no time-based comparisons can be made.
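The sort and SYS options mentioned above can be combined on one command line; the trace file and output file names below are illustrative:

```
tkprof ora_1234.trc report.prf sort=fchela sys=no
```

This writes a report with the statement that spent the most elapsed time fetching at the top, and omits recursive SQL run as SYS.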
A typical TKPROF output for a select/insert/update/delete looks like:

call       count    cpu   elapsed   disk   query   current   rows
-------- ------- ------ --------- ------ ------- --------- ------
Parse          2    221       329      0      45         0      0
Execute        3      9        17      0       0         0      0
Fetch          3      6         8      0       4         0      1
-------- ------- ------ --------- ------ ------- --------- ------
Let's see the definition of each column.
Interpreting TkProf Output: Guidelines
call: statistics for each cursor's activity are divided into 3 areas:
Parse: statistics from parsing the cursor.
This includes information for plan generation etc.
Execute: statistics for the execution phase of a cursor.
Fetch: statistics for actually fetching the rows.

count: number of times we have performed a particular activity on this particular cursor.
cpu: CPU time used by this cursor.
elapsed: elapsed time for this cursor.
disk: This indicates the number of blocks read from disk.
Generally you want to see blocks being read from the buffer cache rather than disk.
query : This column is incremented if a buffer is read in Consistent mode.
A Consistent mode buffer is one that has been generated to give a consistent read snapshot for a long running transaction.
The buffer actually contains this status in its header.
current: This column is incremented if a buffer found in the buffer cache that is new enough for the current transaction and is in current mode (and it is not a CR buffer).
This applies to buffers that have been read in to the cache as well as buffers that already exist in the cache in current mode.
rows: Rows retrieved by this step
What is my first step while tuning the performance of a query?
It is better to use the autotrace feature of SQL*Plus on statements rather than TkProf, mainly because the TkProf output can be confusing with regard to whether the rule-based or cost-based optimizer has been used. Because the TkProf explain plan does not show any costs or statistics,
it is sometimes not possible to tell definitively which optimizer was used.
That said, the following output from the TkProf explain plan is useful.
The Rows column next to the explain plan output shows the number of rows processed by that particular step.
The information is gathered from the STAT lines for each cursor in the raw trace output.
Remember that if the cursor is not closed then you will not see any output.
Setting SQL_TRACE to FALSE does NOT close PL/SQL child cursors.
Cursors are closed in SQL*Plus immediately after execution.
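The autotrace feature mentioned above is switched on from SQL*Plus itself; a minimal sketch, assuming the PLUSTRACE role has been granted (the queried table is hypothetical):

```sql
-- Show the result set, execution plan and statistics for each statement
SET AUTOTRACE ON
SELECT ename FROM emp WHERE empno = 7839;

-- Suppress the result set but keep the plan and statistics
SET AUTOTRACE TRACEONLY

-- Switch it back off
SET AUTOTRACE OFF
```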
Let's see some tkprof examples in the next post.

=================

STEP 2 - Examine statements using high resource
===============================================
update ...
where ...
call       count    cpu   elapsed   disk     query   current    rows
-------- ------- ------ --------- ------ --------- --------- -------
Parse          1      7       122      0         0         0       0
Execute        1     75       461      5   [H] 297     [I] 3   [J] 1
Fetch          0      0         0      0         0         0       0

[H] shows that this query is visiting 297 blocks to find the rows to update.
[I] shows that only 3 blocks are visited performing the update.
[J] shows that only 1 row is updated.
Visiting 297 blocks to update 1 row is a lot. Possibly there is an index missing?
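If a missing index is suspected, one can check which columns of the table are already indexed and, if the filtered column is not among them, consider adding an index. The table and column names below are hypothetical:

```sql
-- List existing indexed columns on the table (names are illustrative)
SELECT index_name, column_name
FROM   user_ind_columns
WHERE  table_name = 'ORDERS';

-- If the WHERE clause column is not indexed, an index may cut the blocks visited
CREATE INDEX orders_status_idx ON orders (status);
```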


STEP 3 - Look for over parsing
==============================
select ...

call       count      cpu   elapsed   disk   query   current    rows
-------- ------- -------- --------- ------ ------- --------- -------
Parse      [M] 2  [N] 221       329      0      45         0       0
Execute    [O] 3    [P] 9        17      0       0         0       0
Fetch          3        6         8      0   [L] 4         0   [K] 1

Misses in library cache during parse: 2 [Q]

[K] shows that the query has returned 1 row.
[L] shows that we had to read 4 blocks to get this row back. This is fine.
[M] shows that we are parsing the statement twice; this is not desirable, especially as the CPU usage for parsing [N] is high in comparison to the execute figures [O] & [P]. [Q] shows that these parses are hard parses.

If [Q] were 1, then the statement would have had 1 hard parse followed by a soft parse (which just looks up the already parsed detail in the library cache).
This is not a particularly bad example since the query has only been executed a few times.

However, excessive parsing should be avoided as far as possible by ensuring that code is shared:
- use bind variables
- make the shared pool large enough to hold query definitions in memory long enough to be reused
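As a sketch of the bind-variable point (table, column, and variable names are illustrative): each distinct literal value produces a distinct statement text and a hard parse, whereas a bind variable lets Oracle reuse one parsed statement:

```sql
-- Hard parses: each literal value is a different statement in the shared pool
SELECT ename FROM emp WHERE empno = 7839;
SELECT ename FROM emp WHERE empno = 7844;

-- Shared: one statement text, different bind values (SQL*Plus syntax)
VARIABLE id NUMBER
EXEC :id := 7839;
SELECT ename FROM emp WHERE empno = :id;
```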

TKPROF - Performance Tuning (Example 1) - 3

Examples:
Step 1 - Look at the totals at the end of the tkprof output
===========================================================
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call       count    cpu   elapsed      disk     query   current    rows
-------- ------- ------ --------- --------- --------- --------- -------
Parse      [A] 7   1.87      4.53       385   [G] 553        22       0
Execute    [E] 7   0.03      0.11     [P] 0     [C] 0     [D] 0   [F] 0
Fetch      [E] 6   1.39      4.21   [P] 128   [C] 820     [D] 3  [F] 20
-----------------------------------------------------------------------
Misses in library cache during parse: 5
Misses in library cache during execute: 1
8 user SQL statements in session.
12 internal SQL statements in session.
[B] 54 SQL statements in session.
3 statements EXPLAINed in this session.
1. Compare [A] & [B] to spot over parsing.
In this case we have 7 parses for 54 statements which is ok.
2. You can use [P], [C] & [D] to determine the hit ratio.
Hit Ratio is logical reads/physical reads:
Logical Reads = Consistent Gets + DB Block Gets
Logical Reads = query + current
Logical Reads = Sum[C] + Sum[D]
Logical Reads = 0+820 + 0+3
Logical Reads = 820 + 3
Logical Reads = 823
Hit Ratio = 1 - (Physical Reads / Logical Reads)
Hit Ratio = 1 - (Sum[P] / Logical Reads)
Hit Ratio = 1 - (128 / 823)
Hit Ratio = 1 - (0.16)
Hit Ratio = 0.84 or 84%
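The hit-ratio arithmetic above can be checked with a few lines of Python; the numbers are the ones taken from the OVERALL TOTALS table:

```python
# Values from the OVERALL TOTALS table
physical_reads = 0 + 128        # sum of the disk column [P]
consistent_gets = 0 + 820       # sum of the query column [C]
db_block_gets = 0 + 3           # sum of the current column [D]

# Logical reads = consistent gets + db block gets
logical_reads = consistent_gets + db_block_gets

# Hit ratio = 1 - (physical reads / logical reads)
hit_ratio = 1 - physical_reads / logical_reads

print(logical_reads)            # 823
print(round(hit_ratio, 2))      # 0.84, i.e. 84%
```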
3. We want fetches to be less than the number of rows as this will mean we have done less work (array fetching).
To see this we can compare [E] and [F].
[E] = 6 = number of fetches
[F] = 20 = number of rows
So we are doing 6 fetches to retrieve 20 rows - not too bad.
If array fetching were configured, the rows could be retrieved with fewer fetches.
Remember that an extra fetch will be done at the end to check that the end of fetch has been reached.
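In SQL*Plus the fetch array size is controlled with SET ARRAYSIZE (the default is 15 rows per fetch); a quick sketch, with a hypothetical table:

```sql
-- Raise the array size so more rows come back per fetch call
SET ARRAYSIZE 100
SELECT * FROM emp;   -- 20 rows would now come back in a single fetch (plus the end-check fetch)
```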
4. [G] shows the reads on the dictionary cache for the statements - this should not be a problem on Oracle7.
In this case we have done 553 reads from the dictionary cache.
