SQL TUNING
==========
Avoid Data type Mismatch
========================
Avoid data type mismatches on indexed columns. If the data types do not match, the index on that column will not be used and a FULL TABLE SCAN will be performed.
Example: Order_Number is a number column in OE_ORDER_DETAILS_F table.
But in the following:
SELECT description
FROM oe_order_details_f
WHERE order_number = '115126'
The where clause will be executed as to_char(order_number) = '115126'
Avoid Functions on Indexed Columns
==================================
Avoid functions on indexed columns. In this case too, the presence of a function will make the optimizer go for a FULL TABLE SCAN.
Example:
If SCHEDULE_DATE is indexed in the OE_ORDER_DETAILS_F table, the where clause
to_char(schedule_date, 'dd-mon-yyyy') = '01-Jan-2003'
will prevent the usage of the index.
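An index-friendly alternative (an illustrative rewrite, not part of the original example) is to apply the function to the literal side instead of the indexed column:
schedule_date >= to_date('01-Jan-2003', 'dd-mon-yyyy')
and schedule_date < to_date('02-Jan-2003', 'dd-mon-yyyy')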
Define Initial Columns in Composite Indexes
===========================================
When using a composite index, ensure that its leading (initial) columns appear in the WHERE clause. If the leading columns are not referenced, the composite index will not be used.
Example:
Consider a table in which brand and product columns have a composite index. And we execute the following query.
Select count(*)
From products
Where price < 1000;
The composite index will not be made use of. But the same composite index will be hit for the query given below:
Select count(*)
From products
Where brand = 'Arrow';
It's important to ensure that the leading columns are referenced while making use of composite indexes. The same index can also be used with the following code:
Select count(*)
From products
Where brand > '0'
and price < 1000;
Use WHERE Instead of HAVING
===========================
It's more efficient to use a WHERE clause instead of a HAVING clause, because WHERE filters rows before grouping, whereas HAVING filters only after the groups are built.
Example:
Consider this query
Select brand, sum(price)
From products
Group by brand
Having brand = 'Arrow';
The above query can be re-written as shown below. This will ensure that the index, if present, is made use of.
Select brand, sum(price)
From products
Where brand = 'Arrow'
Group by brand;
Avoid Nested Queries
====================
Nested queries should be rewritten as joins, whenever possible, for efficiency, as shown below.
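Example (an illustrative sketch; the EMP and DEPT tables and a joining DEPT_NO column are assumed, with DEPT_NO unique in DEPT so the two queries return the same rows):
-- Nested query
SELECT emp_name
FROM emp
WHERE dept_no IN (SELECT dept_no FROM dept WHERE dept_type = 'SALES');
-- Rewritten as a join
SELECT e.emp_name
FROM emp e, dept d
WHERE e.dept_no = d.dept_no
AND d.dept_type = 'SALES';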
Using Driving Table in RULE Based Optimizer
===========================================
Ensure that the smallest table appears last in the FROM clause when specifying a join query if you are using the RULE-based optimizer; the table listed last becomes the driving table.
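Example (illustrative; TRANSACTIONS is assumed to be the large table and CODES the small lookup table, so CODES is listed last to act as the driving table):
SELECT t.txn_id, c.description
FROM transactions t, codes c
WHERE t.code_id = c.code_id;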
'Not exists' and Outer Joins
============================
Replace "not in" with "not exists" or an outer join. This will ensure that indexes, if present, are used.
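Example (an illustrative sketch; the CUSTOMERS and ORDERS tables are assumed):
-- Using NOT IN
SELECT c.customer_id
FROM customers c
WHERE c.customer_id NOT IN (SELECT o.customer_id FROM orders o);
-- Rewritten with NOT EXISTS
SELECT c.customer_id
FROM customers c
WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.customer_id);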
Avoid NOT EQUAL
===============
Replace '!=' (not equal) with a UNION of range predicates using '<' and '>', so that indexes can still be used.
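Example (an illustrative sketch; the ORDERS table and an index on STATUS are assumed):
-- The index on STATUS is typically not used
SELECT order_id FROM orders WHERE status != 'CLOSED';
-- Rewritten with range predicates so index range scans are possible
SELECT order_id FROM orders WHERE status < 'CLOSED'
UNION ALL
SELECT order_id FROM orders WHERE status > 'CLOSED';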
Use Function Based Index for NVL Columns
========================================
Make use of function-based indexes when applying NVL to an indexed column cannot be avoided.
Example:
Create index idx_test on test(nvl(column_1,0));
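A query can then use this index as long as it references the same expression (an illustrative sketch; it also assumes the optimizer is allowed to use function-based indexes, e.g. QUERY_REWRITE_ENABLED in older releases):
Select * from test where nvl(column_1,0) = 100;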
Reduce the Number of Trips to the Database
==========================================
Every time a SQL statement is executed, Oracle needs to perform some internal processing: the statement needs to be parsed, indexes evaluated, variables bound, and data blocks read. The more you reduce the number of database accesses, the more overhead you save.
Example:
There are 3 distinct ways of retrieving data about employees who have employee numbers 0342 or 0291.
Method 1 (Least Efficient):
SELECT EMP_NAME, SALARY, GRADE
FROM EMP
WHERE EMP_NO = 0342;
SELECT EMP_NAME, SALARY, GRADE
FROM EMP
WHERE EMP_NO = 0291;
Method 2 (Next Most Efficient):
DECLARE
CURSOR C1(E_NO NUMBER) IS
SELECT EMP_NAME, SALARY, GRADE
FROM EMP
WHERE EMP_NO = E_NO;
BEGIN
OPEN C1(342);
FETCH C1 INTO …, …, …;
CLOSE C1;
.
.
OPEN C1(291);
FETCH C1 INTO …, …, …;
CLOSE C1;
END;
Method 3 (Most Efficient):
SELECT A.EMP_NAME, A.SALARY, A.GRADE,
B.EMP_NAME, B.SALARY, B.GRADE
FROM EMP A,
EMP B
WHERE A.EMP_NO = 0342
AND B.EMP_NO = 0291;
It’s advisable to make use of Stored Procedures wherever possible.
Use TRUNCATE Instead of DELETE
==============================
When rows are removed from a table using DELETE, the rollback segments are used to hold undo information. If you do not commit your transaction, Oracle uses these rollback segments to restore the data to the state it was in before your transaction started.
With TRUNCATE, no undo information is generated. Once the table is truncated, the data cannot be recovered (no rollback is possible). Hence TRUNCATE is faster and needs fewer resources.
Use TRUNCATE rather than DELETE when you need to remove all rows from a table, whether small or large, and you are sure you do not need the undo information.
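Example (illustrative; TEMP_STAGING is an assumed work table):
-- Generates undo and can be rolled back
DELETE FROM temp_staging;
-- Generates no undo, cannot be rolled back, and is much faster
TRUNCATE TABLE temp_staging;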
Use DECODE
==========
The DECODE function provides a way to avoid scanning the same rows repetitively or joining the same table repetitively.
Example:
SELECT COUNT(*), SUM(SAL)
FROM EMP
WHERE DEPT_NO = 0020
AND ENAME LIKE 'SMITH%';
SELECT COUNT(*), SUM(SAL)
FROM EMP
WHERE DEPT_NO = 0030
AND ENAME LIKE 'SMITH%';
You can achieve the same result much more efficiently using DECODE:
SELECT COUNT(DECODE(DEPT_NO, 0020, 'X', NULL)) D0020_COUNT,
COUNT(DECODE(DEPT_NO, 0030, 'X', NULL)) D0030_COUNT,
SUM(DECODE(DEPT_NO, 0020, SAL, NULL)) D0020_SAL,
SUM(DECODE(DEPT_NO, 0030, SAL, NULL)) D0030_SAL
FROM EMP
WHERE ENAME LIKE 'SMITH%';
Similarly, DECODE can be used effectively in the GROUP BY or ORDER BY clause.
In Oracle 8i and above, the CASE expression can also be used instead of DECODE; it is more readable and flexible.
Example:
SELECT
case when instr(UPPER(nvl(approver,'Chirag')),'CHIRAG') > 0 then 'Chirag'
when instr(UPPER(nvl(approver,'Chirag')),'SHARMILA') > 0 then 'Sharmila'
when instr(UPPER(nvl(approver,'Chirag')),'NAGARAJAN') > 0 then 'Nagarajan'
when instr(UPPER(nvl(approver,'Chirag')),'NAGARANJAN') > 0 then 'Nagarajan'
else approver
end,
count(*) No_Of_Patches
FROM
apps.patch_format
where
substr(to_char(CREATION_DATE,'dd-mon-yyyy'),4,8)=
substr(to_char(trunc(sysdate),'dd-mon-yyyy'),4,8)
group by
--FISCAL_WEEK,
case when instr(UPPER(nvl(approver,'Chirag')),'CHIRAG') > 0 then 'Chirag'
when instr(UPPER(nvl(approver,'Chirag')),'SHARMILA') > 0 then 'Sharmila'
when instr(UPPER(nvl(approver,'Chirag')),'NAGARAJAN') > 0 then 'Nagarajan'
when instr(UPPER(nvl(approver,'Chirag')),'NAGARANJAN') > 0 then 'Nagarajan'
else approver
end
--order by fiscal_week
/
Use Count(*) to Count Rows from Table
=====================================
Contrary to popular belief, COUNT(*) is faster than COUNT(1). If the rows are being returned via an index, counting the indexed column is faster still.
Example:
Select COUNT(EMPNO) from emp;
Use UNION-ALL Instead of UNION (Where Possible)
===============================================
When a query performs a UNION of the results from two queries, the result sets are merged via a UNION-ALL operation and then processed by a SORT UNIQUE operation before the records are returned to the user.
If the query uses UNION-ALL in place of UNION, the SORT UNIQUE operation is not necessary, improving the performance of the query. Note that UNION ALL does not remove duplicate rows, so use it only when the two result sets cannot overlap or when duplicates are acceptable.
Example:
Least Efficient:
SELECT ACCT_NUM, BALANCE_AMT
FROM DEBIT_TRANSACTIONS
WHERE TRAN_DATE = '31-DEC-95'
UNION
SELECT ACCT_NUM, BALANCE_AMT
FROM CREDIT_TRANSACTIONS
WHERE TRAN_DATE = '31-DEC-95'
Most Efficient:
SELECT ACCT_NUM, BALANCE_AMT
FROM DEBIT_TRANSACTIONS
WHERE TRAN_DATE = '31-DEC-95'
UNION ALL
SELECT ACCT_NUM, BALANCE_AMT
FROM CREDIT_TRANSACTIONS
WHERE TRAN_DATE = '31-DEC-95'
Use WHERE Instead of ORDER BY Clause
====================================
ORDER BY clauses use an index only if they meet two requirements.
• All the columns that make up the ORDER BY clause must be contained within a single index in the same sequence.
• All the columns that make up the ORDER BY clause must be defined as NOT NULL within the table definition. Remember, null values are not contained within an index.
WHERE clause indexes and ORDER BY indexes cannot be used in parallel.
Example:
Consider a table DEPT with the following fields:
DEPT_CODE PK NOT NULL
DEPT_DESC NOT NULL
DEPT_TYPE NULL
NON UNIQUE INDEX (DEPT_TYPE)
Least Efficient: (Here, index will not be used)
SELECT DEPT_CODE
FROM DEPT
ORDER BY DEPT_TYPE
Explain Plan:
SORT ORDER BY
TABLE ACCESS FULL
Most Efficient: (Here, index will be used)
SELECT DEPT_CODE
FROM DEPT
WHERE DEPT_TYPE > 0
Explain Plan:
TABLE ACCESS BY ROWID ON DEPT
INDEX RANGE SCAN ON DEPT_IDX
Use IN Instead of OR
====================
The following query can be replaced using IN to improve the performance:
Least Efficient:
SELECT . . .
FROM LOCATION
WHERE LOC_ID = 10
OR LOC_ID = 20
OR LOC_ID = 30
Most Efficient:
SELECT . . .
FROM LOCATION
WHERE LOC_ID IN (10,20,30)
Use Full Table Scans
====================
When an index scan would perform more block visits than a full table scan, it is better to use a full table scan.
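Example (illustrative, using the PRODUCTS table from the earlier examples; the FULL hint asks the optimizer to scan the table rather than use an index):
Select /*+ FULL(p) */ count(*)
From products p
Where price < 1000;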
When to go for Fast Full Scans
==============================
Index fast full scans are an alternative to full table scans when the index contains all the columns that are needed for the query. This access path can be requested with the optimizer hint INDEX_FFS.
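Example (illustrative; assumes a composite index PRODUCTS_BRAND_PRICE_IDX on (BRAND, PRICE) that contains every column the query needs):
Select /*+ INDEX_FFS(p products_brand_price_idx) */ brand, price
From products p;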
Bitmap Indexes
==============
Consider bitmap indexes when WHERE clause predicates reference low-cardinality columns or combine such columns with logical operations such as OR, AND, or NOT. It is advisable not to use bitmap indexes in OLTP applications.
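Example (illustrative; GENDER is an assumed low-cardinality column on EMP):
CREATE BITMAP INDEX emp_gender_bix ON emp(gender);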
Using composite indexes
=======================
Make use of composite indexes. Order their columns in decreasing order of selectivity, with the most selective column first.
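Example (illustrative; BRAND is assumed to be more selective than CATEGORY, so it is placed first):
CREATE INDEX products_brand_cat_idx ON products(brand, category);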
Create Indexes on Foreign Key
=============================
Create indexes on foreign key columns if queries regularly retrieve rows through the master-detail relationship.
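Example (illustrative; ORDER_ID in ORDER_LINES is assumed to be a foreign key to ORDERS):
CREATE INDEX order_lines_n1 ON order_lines(order_id);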
PL/SQL Tuning Guidelines
========================
Use Bind Variables
==================
Making use of bind variables reduces repeated hard parsing of otherwise identical statements.
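Example (an illustrative sketch using the EMP table from the earlier examples):
-- Each literal value forces a separate parse
SELECT emp_name FROM emp WHERE emp_no = 342;
SELECT emp_name FROM emp WHERE emp_no = 291;
-- With a bind variable the parsed statement is re-used
SELECT emp_name FROM emp WHERE emp_no = :emp_no;
Within PL/SQL, ordinary variables referenced in static SQL are automatically treated as bind variables.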
Avoid Unnecessary Database Calls
================================
Avoid SELECT statements against the database when the same result can be achieved in PL/SQL by simply assigning values to variables. System values such as UID and SYSDATE can be assigned to variables without SELECT statements.
We can use an assignment instead of issuing an SQL statement.
Example:
Declare
From_date date;
begin
Select sysdate into from_date from dual;
End;
Can be replaced by
Declare
From_date date;
begin
From_date := sysdate;
End;
Use of PLS_INTEGER
==================
Make use of PLS_INTEGER instead of NUMBER or INTEGER for loop variables and counters: PLS_INTEGER uses machine arithmetic, whereas NUMBER and INTEGER use library arithmetic. Hence PLS_INTEGER is faster.
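Example (illustrative):
Declare
total PLS_INTEGER := 0; -- machine arithmetic, faster than NUMBER
begin
for i in 1 .. 1000000 loop
total := total + 1;
end loop;
End;
/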
Avoid BINARY_INTEGER
====================
The use of the BINARY_INTEGER datatype within arithmetic operations makes them slower than the same operations using an INTEGER datatype.
Pinning Stored Procedures
=========================
Anonymous PL/SQL blocks should be moved into stored objects when possible, and a frequently referenced object should then be pinned in the shared pool using dbms_shared_pool.keep().
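Example (illustrative; assumes the DBMS_SHARED_POOL package is installed and that a package named APPS.XX_COMMON_PKG exists):
exec dbms_shared_pool.keep('APPS.XX_COMMON_PKG', 'P');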
Use WHEN Clause in Triggers
===========================
Use a WHEN clause on the trigger if possible so that the trigger is fired only when absolutely necessary.
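Example (an illustrative sketch; EMP_SAL_AUDIT_LOG is an assumed audit table). The trigger body runs only when SAL actually changes:
CREATE OR REPLACE TRIGGER emp_sal_audit
AFTER UPDATE OF sal ON emp
FOR EACH ROW
WHEN (new.sal <> old.sal)
BEGIN
INSERT INTO emp_sal_audit_log (emp_no, old_sal, new_sal)
VALUES (:old.emp_no, :old.sal, :new.sal);
END;
/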
Use AFTER Row Triggers
======================
Use AFTER row triggers instead of BEFORE row triggers, wherever possible, as AFTER row triggers are faster.
Use Bulk Binding
================
Bulk binds improve performance by minimizing the number of context switches between the PL/SQL and SQL engines while an entire collection of elements (varray, nested table, index-by table, or host array) is passed back and forth as a bind variable. Make use of BULK COLLECT and FORALL. When there is a large data volume and you need to fetch data for validations and/or processing, it is generally better, performance-wise, to use the BULK constructs.
Use LIMIT clause
================
Using the LIMIT clause during BULK COLLECT caps the number of rows fetched at a time, which keeps memory and CPU utilization under control.
Example:
set serveroutput on
declare
TYPE tstrings IS TABLE OF string(255) INDEX BY BINARY_INTEGER;
type tnumbers IS TABLE OF FLOAT INDEX BY BINARY_INTEGER;
n tstrings;
cursor c is select object_name from user_objects;
bulk_limit number;
begin
open c;
-- first bulk 10
dbms_output.put_line('-- first bulk 10');
bulk_limit := 10;
fetch c bulk collect into n LIMIT bulk_limit;
for i in 1 .. n.count loop -- safe even if the fetch returned no rows
dbms_output.put_line(n(i));
end loop;
-- second bulk 15
dbms_output.put_line('-- second bulk 15');
bulk_limit := 15;
fetch c bulk collect into n LIMIT bulk_limit;
for i in 1 .. n.count loop -- safe even if the fetch returned no rows
dbms_output.put_line(n(i));
end loop;
close c;
end;
/
More Examples of Bulk Binding:
==============================
CURSOR dis_cust_customer_id_cur
IS
SELECT /*+ ORDERED USE_NL(dm dc) INDEX(dc SO_DISCOUNT_CUSTOMERS_N3) INDEX(dm QP_DISCOUNT_MAPPING_N1) */
dm.new_list_header_id,
dc.customer_id,
--dc.site_use_id,
--dc.customer_class_code,
dc.context,
dc.start_date_active,
dc.end_date_active
FROM ( select distinct old_discount_id, new_list_header_id
from qp_discount_mapping) dm,
so_discount_customers dc
WHERE dm.old_discount_id = dc.discount_id
AND dc.customer_id is not null;
LOOP
NEW_LIST_HEADER_ID_T.DELETE;
CUSTOMER_ID_T.DELETE;
SITE_USE_ID_T.DELETE;
CUSTOMER_CLASS_CODE_T.DELETE;
CONTEXT_T.DELETE;
START_DATE_ACTIVE_T.DELETE;
END_DATE_ACTIVE_T.DELETE;
FETCH dis_cust_customer_id_cur BULK COLLECT
INTO
NEW_LIST_HEADER_ID_T,
CUSTOMER_ID_T,
--SITE_USE_ID_T,
--CUSTOMER_CLASS_CODE_T,
CONTEXT_T,
START_DATE_ACTIVE_T,
END_DATE_ACTIVE_T
LIMIT 1000; -- Fetch 1000 records at a time
BEGIN
IF new_list_header_id_t.FIRST is not null then
FORALL K IN new_list_header_id_t.FIRST..new_list_header_id_t.LAST
UPDATE /*+ index(QP_QUALIFIERS QP_QUALIFIERS_N7) */ QP_QUALIFIERS
SET CONTEXT = CONTEXT_T(K),
START_DATE_ACTIVE = START_DATE_ACTIVE_T(K),
END_DATE_ACTIVE = END_DATE_ACTIVE_T(K)
WHERE QUALIFIER_CONTEXT = 'CUSTOMER'
AND QUALIFIER_ATTRIBUTE = 'QUALIFIER_ATTRIBUTE2'
AND QUALIFIER_ATTR_VALUE = CUSTOMER_ID_T(K)
AND LIST_HEADER_ID = NEW_LIST_HEADER_ID_T(K);
END IF;
EXIT WHEN dis_cust_customer_id_cur%NOTFOUND;
EXCEPTION
WHEN VALUE_ERROR THEN
null;
WHEN INVALID_NUMBER THEN
null;
WHEN OTHERS THEN
v_errortext := SUBSTR(SQLERRM, 1,240);
--oe_debug.add('In others of dis_customers_cur_customer_id');
K := sql%rowcount + new_list_header_id_t.first;
ROLLBACK;
qp_util.log_error(new_list_header_id_t(K),NULL,
NULL, NULL, NULL, NULL, NULL,NULL, 'QP_QUALIFIERS', v_errortext,
'UPDATE_QUALIFIERS');
RAISE;
END;
commit;
END LOOP;
Error Handling in Bulk Binds:
=============================
Avoid committing after each record, as this can hamper your code's performance. The bulk bind feature allows you to track errors at the single-record level even though you process records in bulk.
Example:
SQL> create table t ( x varchar2(10) NOT NULL );
Table created.
SQL> declare
2 type array is table of varchar2(255);
3
4 l_data array := array( 'works 1', 'works 2',
5 'too long, much too long',
6 'works 4', NULL, 'works 6' );
7 l_start number := 1;
8 begin
9 loop
10 begin
11 forall i in l_start .. l_data.count
12 insert into t values ( l_data(i) );
13 EXIT;
14 exception
15 when others then
16 dbms_output.put_line( 'Bad row index = ' || (l_start+sql%rowcount) ||
17 ' ' || sqlerrm );
18 l_start := l_start + sql%rowcount + 1;
19 end;
20 end loop;
21 end;
22 /
Bad row index = 3 ORA-01401: inserted value too large for column
Bad row index = 5 ORA-01400: cannot insert NULL into ("OPS$TEST"."T"."X")
PL/SQL procedure successfully completed.
SQL> select * from t;
X
----------
works 1
works 2
works 4
works 6
SQL>
Taking Advantage of Partition Pruning
=====================================
Large tables are partitioned by DBAs for better manageability, administration, and performance. You need to be aware of which tables are partitioned and write SQL queries that can take advantage of those partitions. When a query uses the conditions on which the partitions were created, Oracle performs "partition pruning" and scans only the relevant partitions, saving resources and improving performance.
Example:
CREATE TABLE CCC_FAD_JOURNAL_DETAILS (
PP_LINE_ID NUMBER (15) NOT NULL,
PERIOD_PART NUMBER (2),
JOURNAL_HEADER_ID NUMBER (15),
JOURNAL_LINE_NBR NUMBER (15),
PERIOD VARCHAR2 (15),
SET_OF_BOOKS_ID NUMBER (15),
CODE_COMBINATION_ID NUMBER (15),
JOURNAL_BATCH VARCHAR2 (100),
JOURNAL_NAME VARCHAR2 (100),
JOURNAL_CATEGORY VARCHAR2 (25),
JOURNAL_DESCRIPTION VARCHAR2 (240),
AFF_COMPANY VARCHAR2 (25),
AFF_ACCOUNT VARCHAR2 (25),
AFF_CENTER VARCHAR2 (25),
AFF_BASE_VARIABLE VARCHAR2 (25),
AFF_MODALITY VARCHAR2 (25),
AFF_MARKET_SEGMENT VARCHAR2 (25),
AFF_FOLDER VARCHAR2 (25),
AFF_PRODUCT_SOURCE VARCHAR2 (25),
AFF_DESTINATION VARCHAR2 (25),
LOCAL_CURRENCY_AMOUNT NUMBER,
LOCAL_CURRENCY_DBCR_INDICATOR VARCHAR2 (1),
JOURNAL_TRANSACTION_AMOUNT NUMBER,
JOURNAL_TRANS_DBCR_INDICATOR VARCHAR2 (1) NOT NULL,
TRANSACTION_CURRENCY_CODE VARCHAR2 (15),
DATE_CREATED DATE,
JOURNAL_SOURCE VARCHAR2 (25),
TRANSACTION_CURRENCY_RATE NUMBER,
USER_SEQUENCE NUMBER (15),
ACCOUNTING_DATE DATE,
ORDER_NUMBER VARCHAR2 (150),
INVENTORY_ITEM_NUMBER VARCHAR2 (150),
QUANTITY VARCHAR2 (150),
UNIT_OF_MEASURE VARCHAR2 (150),
DOCUMENT_NUMBER VARCHAR2 (150),
DOCUMENT_DATE VARCHAR2 (150),
PROJECT_NUMBER VARCHAR2 (150),
DOCUMENT_NUMBER_TWO VARCHAR2 (150),
SHIPPED_DATE VARCHAR2 (150),
SL_REFERENCE_ID NUMBER)
TABLESPACE USERS
PCTFREE 10
PCTUSED 40
INITRANS 10
MAXTRANS 255
PARTITION BY RANGE (PERIOD_PART)
(
PARTITION JOURNL_DETLS01 VALUES LESS THAN (2) TABLESPACE JOURNL_DETLS01,
PARTITION JOURNL_DETLS02 VALUES LESS THAN (3) TABLESPACE JOURNL_DETLS02,
PARTITION JOURNL_DETLS03 VALUES LESS THAN (4) TABLESPACE JOURNL_DETLS03,
PARTITION JOURNL_DETLS04 VALUES LESS THAN (5) TABLESPACE JOURNL_DETLS04,
PARTITION JOURNL_DETLS05 VALUES LESS THAN (6) TABLESPACE JOURNL_DETLS05,
PARTITION JOURNL_DETLS06 VALUES LESS THAN (7) TABLESPACE JOURNL_DETLS06,
PARTITION JOURNL_DETLS07 VALUES LESS THAN (8) TABLESPACE JOURNL_DETLS07,
PARTITION JOURNL_DETLS08 VALUES LESS THAN (9) TABLESPACE JOURNL_DETLS08,
PARTITION JOURNL_DETLS09 VALUES LESS THAN (10) TABLESPACE JOURNL_DETLS09,
PARTITION JOURNL_DETLS10 VALUES LESS THAN (11) TABLESPACE JOURNL_DETLS10,
PARTITION JOURNL_DETLS11 VALUES LESS THAN (12) TABLESPACE JOURNL_DETLS11,
PARTITION JOURNL_DETLS12 VALUES LESS THAN (13) TABLESPACE JOURNL_DETLS12,
PARTITION JOURNL_DETLS49 VALUES LESS THAN (MAXVALUE) TABLESPACE JOURNL_DETLS49
);
For this table, if you want data for only the last two periods, it is better to filter on the column PERIOD_PART than on a date range. This ensures that Oracle does partition pruning.
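Example (illustrative; periods 11 and 12 stand for the last two periods):
SELECT journal_batch, SUM(local_currency_amount)
FROM ccc_fad_journal_details
WHERE period_part IN (11, 12)
GROUP BY journal_batch;
Because PERIOD_PART is the partition key, only partitions JOURNL_DETLS11 and JOURNL_DETLS12 are scanned.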
Similarly, if you want to update all the records in this table, instead of trying to do it in one shot, split the update partition-wise and run multiple partition updates in parallel. This speeds up the process manifold. The point to consider here is the available resources, which determine how many parallel processes to run.
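For example (an illustrative sketch; the SET clause is only a placeholder), each of these statements can be run from a separate session so the partitions are updated in parallel:
UPDATE ccc_fad_journal_details PARTITION (journl_detls01)
SET journal_source = 'CONVERSION';
UPDATE ccc_fad_journal_details PARTITION (journl_detls02)
SET journal_source = 'CONVERSION';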
Pro*C Tuning Guidelines
========================
Use Precompiler Options
=======================
Use the HOLD_CURSOR=YES and RELEASE_CURSOR=NO precompiler options when precompiling. Note that these options have no effect when MODE=ANSI.
These two parameters can also be set inline in the program with the following syntax:
EXEC ORACLE OPTION (HOLD_CURSOR=YES);
EXEC ORACLE OPTION (RELEASE_CURSOR=NO);
MAXOPENCURSORS (default value 10) specifies the initial size of the program cursor cache. The cursor cache may grow, depending on the values of HOLD_CURSOR and RELEASE_CURSOR. So, it is better to increase the MAXOPENCURSORS value in the precompiler options.
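Example (an illustrative precompiler command line; the file name and values are assumptions):
proc iname=report.pc hold_cursor=yes release_cursor=no maxopencursors=50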