
Tuesday, 3 July 2012

NOTES



======CODD RULES==
Codd's Rules

1. Information Rule: All information in a relational database, including table and column names, is represented by values in tables (data dict tables/cols).
2. Guaranteed Access Rule: Each and every datum (atomic value) in a relational database is guaranteed to be logically accessible by resorting to a combination of table name, primary key value, and column name (accessible thru tab/col/PK).
3. Systematic Treatment of Nulls Rule: The RDBMS handles records that have unknown or inapplicable values in a systematic way (nulls).
4. Active On-line Catalog Based on the Relational Model: The description of the database and its contents is itself stored in database tables and can therefore be queried on-line via the data manipulation language (query dict with sql; see the sketch after this list).
5. Comprehensive Data Sub-language Rule: An RDBMS may support several languages, but at least one of them should allow the user to do all of the following: define tables and views, query and update the data, set integrity constraints, set authorizations and define transactions (sql lang DDL).
6. View Updating Rule: Any view that is theoretically updatable can be updated using the RDBMS (view update).
7. High-Level Insert, Update and Delete: The RDBMS supports inserts, updates and deletes at a table level in addition to selection (dml).
8. Physical Data Independence: The execution of ad hoc requests and application programs is not affected by changes in the physical data access and storage methods (physical Ind.).
9. Logical Data Independence: Logical changes to tables and views, such as adding/deleting columns or changing field lengths, should not necessitate modifications in the applications (logical Ind.).
10. Integrity Independence: Like table/view definitions, integrity constraints are stored in the on-line catalog and not in the application programs (integrity const in db).
11. Distribution Independence: Application programs and ad hoc requests are not affected by changes in the distribution of physical data (distribution Ind.).
12. No Subversion Rule: If the RDBMS has a low-level language that accesses records one at a time, that language should not be used to bypass the integrity constraints (no bypassing integrity constraints).
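
Rules 1 and 4 are easy to see in practice: the data dictionary itself is queried with ordinary SQL. A minimal sketch (EMP is a placeholder table name):

SELECT table_name, column_name
FROM   user_tab_columns
WHERE  table_name = 'EMP';   -- the dictionary describes tables as rows in other tables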

======ORACLE==
Oracle Architecture
An Oracle server: a DBMS that provides an open, comprehensive, integrated approach to information management; it consists of an instance and a database.
An Oracle instance: a means to access an Oracle database; it always opens one and only one database and consists of memory structures and background processes.
An Oracle database: a collection of data that is treated as a unit; it consists of datafiles, control files and redo log files (optionally a parameter file, password file and archived logs).

Instance memory Structures:

System Global Area (SGA): Allocated at instance startup; a fundamental component of an Oracle instance. SGA memory structures include the Shared Pool, Database Buffer Cache and Redo Log Buffer, among others.
Shared Pool: Consists of two key performance-related memory structures: the Library Cache and the Data Dictionary Cache.
Library Cache: Stores information about the most recently used SQL and PL/SQL statements and enables the sharing of commonly used statements.
Data Dictionary Cache: Stores a collection of the most recently used definitions in the database, including database files, tables, indexes, columns, etc. Improves performance: during the parse phase, the server process looks in the data dictionary for information to resolve object names and validate access.
Database Buffer Cache: Stores copies of data blocks that have been retrieved from the datafiles; all reads and changes to data blocks go through this cache.
Redo Log Buffer: Records all changes made to database data blocks; its primary purpose is recovery. Redo entries contain the information needed to reconstruct (redo) changes.
User process: Started when a database user requests a connection to the Oracle server. It requests interaction with the Oracle server but does not interact with it directly.
Server process: Connects to the Oracle instance; started when a user establishes a session. It fulfills the calls generated and returns the results. Each server process has its own non-shared PGA when the process is started.
The server process parses and runs SQL statements issued through the application, reads the necessary data blocks from datafiles on disk into the shared database buffers of the SGA (if the blocks are not already present in the SGA), and returns results in such a way that the application can process the information.
In some situations when the application and Oracle Database operate on the same computer, it is possible to combine the user process and corresponding server process into a single process to reduce system overhead.
Program Global Area (PGA):  Memory area used by a single Oracle server process. Allocated when the server process is started, deallocated when the process is terminated and used by only one process. Used to process SQL statements and to hold logon and other session information.
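
The current sizes of these memory areas can be inspected from the dynamic performance views; a minimal sketch (requires access to the V$ views):

SELECT * FROM v$sga;                 -- top-level SGA components
SELECT name, bytes FROM v$sgainfo;   -- finer-grained SGA breakdown
SELECT name, value FROM v$pgastat;   -- PGA statistics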

Background processes:

Started when an Oracle Instance is started. Maintains and enforces relationships between physical and memory structures. Mandatory background processes: SMON, PMON, DBWn, LGWR, CKPT.
DBWn: Writes when: a checkpoint occurs, dirty buffers reach a threshold, there are no free buffers, a timeout occurs, or tablespace-level alterations occur (e.g. offline, begin backup).
LGWR: Writes when: Commit, When one-third full, Every three seconds, Before DBWn writes, When there is 1 MB of redo.
SMON: Responsibilities: Opens database, Instance recovery, Rolls forward changes in redo logs, Rolls back uncommitted transactions, Coalesces free space, Deallocates temporary segments.
PMON: Cleans up after failed processes by: Rolling back the transaction, Releasing locks, Releasing other resources, Restarting dead dispatchers.
CKPT: Responsibilities: signaling DBWn at checkpoints, updating datafile headers and control files with checkpoint information.
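
The background processes running in an instance can be listed from V$BGPROCESS; a minimal sketch:

SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00';   -- only the processes that are actually running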
======================

PRAGMA_SERIALLY_REUSABLE
PRAGMA SERIALLY_REUSABLE: if this pragma is used in a package, the package's variables are reset to their default values at every call to the server within a session; without it, package state persists for the life of the session.

Example of Serially Reusable Packages :How Package Variables Act Across Call Boundaries

This example has a serially reusable package specification (there is no body).

CONNECT Scott/Tiger
CREATE OR REPLACE PACKAGE Sr_pkg IS
  PRAGMA SERIALLY_REUSABLE;
  N NUMBER := 5;                -- default initialization
END Sr_pkg;

Suppose your Enterprise Manager (or SQL*Plus) application issues the following:
CONNECT Scott/Tiger

-- first CALL to server
BEGIN
   Sr_pkg.N := 10;
END;

-- second CALL to server
BEGIN
   DBMS_OUTPUT.PUT_LINE(Sr_pkg.N);
END;

This program prints:
5

Note:  If the package had not had the pragma SERIALLY_REUSABLE, the program would have printed '10'.
====================
DBMS_PIPE
Eg:

Session #1:

DECLARE
 l_status NUMBER(2);
BEGIN

l_status := DBMS_PIPE.CREATE_PIPE (pipename => 'TEST_PIPE',   maxpipesize => 8192, private => TRUE);

DBMS_PIPE.PACK_MESSAGE(item => 'SELECT SYSDATE FROM DUAL');  -- no trailing ';', which would break EXECUTE IMMEDIATE

l_status := DBMS_PIPE.SEND_MESSAGE(pipename => 'TEST_PIPE');

END;

Session #2:
DECLARE
 l_received_message VARCHAR2(128);
 l_message_type NUMBER(2);
 l_status NUMBER(2);
BEGIN

l_status := DBMS_PIPE.RECEIVE_MESSAGE(pipename => 'TEST_PIPE');
l_message_type := DBMS_PIPE.NEXT_ITEM_TYPE;

IF (l_message_type = 9) THEN  -- 9 = VARCHAR2
 DBMS_PIPE.UNPACK_MESSAGE(item => l_received_message);
 EXECUTE IMMEDIATE l_received_message;  -- a SELECT without INTO is parsed and executed, but no rows are fetched
END IF;

l_status := DBMS_PIPE.REMOVE_PIPE(pipename => 'TEST_PIPE');  -- remove the pipe only after the message has been consumed

END;
=========================================
Normalization Rules
1st NF : No Repeating groups : Remove any repeating groups along with the primary key to a new table.


2nd NF: No Partial Dependencies :If columns are dependent on only a part of the primary key, extract them along with the part of the primary key to a new table.


3rd NF: No Transitive Dependencies:  If non-key columns are dependent on another non-key column, extract them to a new table and identify a primary key.


4th NF: Separates independent multi-valued facts into separate tables.


5th NF: Breaks out data redundancy not covered by the previous normal forms.


ELIMINATE DATA NOT DEPENDENT ON THE KEY: all non-key columns should depend on the primary key (the table is then said to exhibit functional dependence). The aim of normalization is to arrive at a set of tables that are fully functionally dependent.
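
As a quick illustration of 2NF, suppose a hypothetical ORDER_ITEMS table keyed on (order_id, product_id) also stores product_name, which depends only on product_id; a minimal sketch of the decomposition (all names are placeholders):

CREATE TABLE products (
  product_id   NUMBER PRIMARY KEY,
  product_name VARCHAR2(100)
);

CREATE TABLE order_items (
  order_id   NUMBER,
  product_id NUMBER REFERENCES products,
  quantity   NUMBER,
  PRIMARY KEY (order_id, product_id)
);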
====
Top Ten Mistakes Found in Oracle Systems

1. Bad Connection Management
2. Bad Use of Cursors and the Shared Pool
3. Bad SQL
4. Use of Nonstandard Initialization Parameters
5. Getting Database I/O Wrong
6. Redo Log Setup Problems
7. Serialization of data blocks in the buffer cache due to lack of free lists, free list groups, transaction slots (INITRANS), or shortage of rollback segments.
8. Long Full Table Scans
9. High Amounts of Recursive (SYS) SQL
10. Deployment and Migration Errors
============
Table Space Point in Time Recovery
TSPITR works with an RMAN-managed auxiliary instance: an Oracle database instance used in the recovery process to perform the actual work of recovery.

The target time: the point in time or SCN that the tablespace will be left at after TSPITR.
The recovery set: the datafiles containing the tablespaces to be recovered.
The auxiliary set: datafiles required for TSPITR of the recovery set which are not themselves part of the recovery set, e.g. the SYSTEM tablespace and UNDO segments.
The auxiliary instance has other files associated with it, such as a control file, parameter file and online logs, but they are not part of the auxiliary set.
The auxiliary destination: an optional location on disk used to store the auxiliary set datafiles, control files and online logs of the auxiliary instance.

TSPITR is used to recover from a truncated table, logical corruption, an incorrect batch job, etc. The database has to be in ARCHIVELOG mode.

Cannot recover dropped tablespaces, tables without their associated constraints, or constraints without the associated tables.
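
A typical invocation, as a hedged sketch (tablespace name, target time and destination are placeholders):

RMAN> RECOVER TABLESPACE users
        UNTIL TIME 'SYSDATE-1'
        AUXILIARY DESTINATION '/u01/aux';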
=================
Grouping Functions

The key advantages of CUBE and ROLLUP are that they will allow more robust aggregations than COMPUTE and they work with any SQL-enabled tool.

GROUPING SETS allows you to specify more than one GROUP BY option in the same result set. Logically it is expressed as separate GROUP BY queries connected by UNION ALL.

            SELECT a, b, SUM(c) FROM tab1 GROUP BY GROUPING SETS ((a, b), a)
            is equivalent to
            SELECT a, b, SUM(c) FROM tab1 GROUP BY a, b
            UNION ALL
            SELECT a, NULL, SUM(c) FROM tab1 GROUP BY a

ROLLUP creates subtotals moving left to right across the columns provided in the ROLLUP. A grand total is provided only if the ROLLUP includes all the columns in the GROUP BY clause.

              The order of the columns in the ROLLUP modifier is significant: when you reverse the order, you get different subtotals.
              GROUP BY ROLLUP (a, b, c) is equivalent to GROUPING SETS ( (a, b, c), (a, b), (a), ( )).

The CUBE modifier in the GROUP BY clause creates subtotals for all possible combinations of the grouping columns. The order of the columns is not significant here, unlike ROLLUP.

              GROUP BY CUBE( a, b, c) is equivalent to  GROUP BY GROUPING SETS ( (a, b, c), (a, b), (b, c), (a, c), (a), (b), (c), ( )).

           
The GROUPING(expr) function returns 1 for summary rows (where expr appears as NULL because it was rolled up) and 0 for non-summary rows. It is used with ROLLUP, CUBE and GROUPING SETS.
           
              You can use the GROUPING function in the HAVING clause to filter out rows.

GROUPING_ID (<col_list>) - Simplifies the above grouping function. It is derived from the GROUPING function by concatenating the GROUPING levels together as bits.
          
               The GROUPING_ID function returns the exact level of the group.

GROUP_ID( ) takes no arguments and requires a GROUP BY clause. GROUP_ID returns a numeric identifier that can be used to uniquely identify duplicate groups.

             Assigns a unique number to each duplicate group. For i duplicate groups, it returns values 0 through i-1. Can be used in HAVING to filter out duplicate groups.
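
A hedged sketch pulling these functions together (EMP with DEPTNO/JOB/SAL is a placeholder table):

SELECT deptno, job, SUM(sal) AS total_sal,
       GROUPING(deptno)         AS g_dept,   -- 1 on rows where deptno was rolled up
       GROUPING_ID(deptno, job) AS gid       -- grouping levels concatenated as bits
FROM   emp
GROUP BY ROLLUP (deptno, job);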
==========
Row Chaining and Migration
Row Migration:
A row migrates when an update to that row would cause it not to fit on the block anymore (along with all of the other data that currently exists there). A migration means that the entire row moves and we just leave behind a "forwarding address": the original block holds only the rowid of the new block, and the entire row is moved there.


Row Chaining:
A row is too large to fit into a single database block. For example, if you use a 4KB block size for your database and you need to insert an 8KB row into it, Oracle will use 3 blocks and store the row in pieces.
Some conditions that cause row chaining: tables whose row size exceeds the block size; tables with LONG and LONG RAW columns; tables with more than 255 columns, since Oracle breaks wide rows up into pieces.
So, instead of just having a forwarding address on one block and the data on another, we have data on two or more blocks.
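
Chained and migrated rows can be listed with ANALYZE; a minimal sketch (the CHAINED_ROWS table is created by the utlchain.sql script, and EMP is a placeholder):

ANALYZE TABLE emp LIST CHAINED ROWS INTO chained_rows;

SELECT table_name, head_rowid FROM chained_rows;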
========
PIVOT and UNPIVOT
PIVOT:

 select * from
 (
     (
        select 'a' value  from dual union all
        select 'e' value  from dual union all
        select 'i'  value  from dual union all
        select 'o' value  from dual union all
        select 'u' value  from dual
    )
   pivot  -- use PIVOT XML to get the output as XML
    (
       count( value)
       for value in   ('a','e','i','o','u')
    )
 )

UNPIVOT:

 select value from
 (
    (
        select
            'a' v1,
            'e' v2,
            'i' v3,
            'o' v4,
            'u' v5
        from dual
    )
    unpivot  -- INCLUDE NULLS may be added here; the default is EXCLUDE NULLS
    (
        value
        for value_type in
            (v1, v2, v3, v4, v5)  -- with PIVOT XML, ANY can be given in the IN clause instead of a list
    )
 )

==============
READ ONLY TRANSACTIONS
During a read-only transaction, all queries refer to the same snapshot of the database, providing a multi-table, multi-query, read-consistent view. Other users can continue to query or update data as usual. A commit or rollback ends the transaction. E.g.: a store manager uses a read-only transaction to gather order totals for the day, the past week, and the past month. The totals are unaffected by other users updating the database during the transaction.

command:

SET TRANSACTION READ ONLY;
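
The store-manager example as a minimal sketch (ORDERS is a placeholder table):

SET TRANSACTION READ ONLY;

SELECT SUM(order_total) FROM orders WHERE order_date >= TRUNC(SYSDATE);       -- today
SELECT SUM(order_total) FROM orders WHERE order_date >= TRUNC(SYSDATE) - 7;   -- past week

COMMIT;   -- ends the read-only transaction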
=======
Materialized Views
A materialized view stores the result of a query: typically aggregates of local tables in a data warehouse, or copies of remote tables.

Use of Materialized Views: this is relatively straightforward and is answered in a single word - performance.

Params:
COMPATIBLE should be set to 8.1.0 or above
QUERY_REWRITE_ENABLED –  TRUE to enable. The default value is FALSE.
QUERY_REWRITE_INTEGRITY:
  ENFORCED - rewrite only when constraints and rules are enforced and guaranteed by Oracle; no chance of getting incorrect or stale data.
  TRUSTED - rewrite even if some constraints are not enforced.
  STALE_TOLERATED - rewrite even when the data is out of sync with the detail tables.

create materialized view mv
build immediate
refresh on commit
enable query rewrite
as select ...


When you create a materialized view with the REFRESH FAST option you need to create a materialized view log on the master table(s). A master table can have only one materialized view log defined on it.
create materialized view log on t WITH ROWID ; -- Columns : M_ROW$$, SNAPTIME$$, DMLTYPE$$, OLD_NEW$$, CHANGE_VECTOR$$


REFRESH CLAUSE:
[refresh [fast|complete|force]
 [on demand | commit]
 [start with date] [next date]
 [with {primary key|rowid}]]

The FAST refreshes use the materialized view logs to send the rows that have changed from master tables to the materialized view.
The COMPLETE refresh re-creates the entire materialized view.
With FORCE clause, Oracle will perform a fast refresh if one is possible or a complete refresh otherwise. FORCE is the default.

The START WITH clause tells the database when to perform the first replication from the master table to the local base table; it should evaluate to a future point in time. The NEXT clause specifies the interval between refreshes.

WITH PRIMARY KEY creates a materialized view based on the primary key of the master table instead of ROWID; PRIMARY KEY is the default option. To use the PRIMARY KEY clause you must have defined a PRIMARY KEY on the master table; otherwise, use ROWID-based materialized views.
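
Putting the refresh pieces together, a hedged sketch of a periodically fast-refreshed materialized view (T and the interval are placeholders):

CREATE MATERIALIZED VIEW LOG ON t WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW mv_t
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 1/24   -- first refresh now, then hourly
  WITH PRIMARY KEY
AS
  SELECT * FROM t;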
=======
HANDLING PLSQL COLLECTION EXCEPTIONS
DECLARE
  TYPE t_tab IS TABLE OF exception_test%ROWTYPE;
  l_tab         t_tab := t_tab();   -- assumed to be populated before the FORALL
  l_error_count NUMBER;

  ex_dml_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(ex_dml_errors, -24381);   -- ORA-24381: error(s) in array DML
BEGIN
  FORALL i IN l_tab.first .. l_tab.last SAVE EXCEPTIONS
    INSERT INTO exception_test VALUES l_tab(i);
EXCEPTION
  WHEN ex_dml_errors THEN
    l_error_count := SQL%BULK_EXCEPTIONS.count;
    FOR i IN 1 .. l_error_count LOOP
      DBMS_OUTPUT.put_line('Error: ' || i ||
        ' Array Index: ' || SQL%BULK_EXCEPTIONS(i).error_index ||
        ' Message: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
    END LOOP;
END;
==========
TIMESTAMP Datatypes
TIMESTAMP WITH TIME ZONE - data is stored with a record of the time zone to which it refers.
TIMESTAMP WITH LOCAL TIME ZONE - data is normalized to the database time zone on saving. When retrieved, it is normalized to the time zone of the user process selecting it.

INTERVAL YEAR TO MONTH - contains year-month
INTERVAL '123-2' YEAR(3) TO MONTH -- the (3) suffix specifies the number of digits considered for YEAR.
INTERVAL '5-3' YEAR TO MONTH + INTERVAL'20' MONTH =  INTERVAL '6-11' YEAR TO MONTH

INTERVAL DAY TO SECOND - contains day-hour-min-sec
INTERVAL '20' DAY - INTERVAL '240' HOUR = INTERVAL '10 0:0:0' DAY TO SECOND
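
A quick sketch of interval arithmetic in a query:

SELECT SYSTIMESTAMP,                                       -- TIMESTAMP WITH TIME ZONE
       SYSTIMESTAMP + INTERVAL '1-6' YEAR TO MONTH,        -- 18 months from now
       SYSTIMESTAMP + INTERVAL '2 3:04:05' DAY TO SECOND   -- 2 days, 3h 4m 5s from now
FROM   dual;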
============
ACID Properties
1. A Is for Atomicity: The principle of atomicity states that all parts of a transaction must complete, or none of them.

2. C Is for Consistency: The principle of consistency states that the results of a query must be consistent with the state of the database at the time the query started.

3. I Is for Isolation: The principle of isolation states that an incomplete (that is, uncommitted) transaction must be invisible to the rest of the world.

4. D Is for Durability: The principle of durability states that once a transaction completes with a COMMIT, it must be impossible for the database to lose it.

===============
Database Tuning Tips
Do as little as possible - focus on simplifying the processes instead of achieving a theoretical design perfection.
Avoid multiple I/Os to the database - bundle queries into a procedure and call that from your application; within procedures, again reduce I/O to the database.
Avoid multiple connections to the DB - connecting to the database is the slowest operation.
Store data the way you need it.
Go atomic - divide and conquer. Use partitions, local indexes.
Spread data across disks - use different tablespaces for tables, indexes and partitions.
Use MVs - reduce operations performed on tables.
Use parallelism - almost everything can be parallelized.
Avoid the need to query undo segments - Oracle then has to look in multiple locations for a piece of data.
Keep stats updated.
Use disk caching.
Use a larger block size - keeps more data in memory.
Make sure blocks are densely packed - adjust PCTFREE; mind row chaining and migration.
Try to perform sorts in memory.
Use a CPU with a SMALL number of FAST processors.
Use standard initialization parameters.
Configure redo logs and checkpoints properly.
Have sufficient rollback segments.
Small tables can be kept in the keep pool using the BUFFER_POOL clause of CREATE TABLE (see the sketch after this list).
Be free from deployment and migration errors.
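
A minimal sketch of the keep-pool tip (table name and columns are placeholders; DB_KEEP_CACHE_SIZE must be set for the KEEP pool to exist):

CREATE TABLE lookup_codes (
  code  NUMBER,
  descr VARCHAR2(50)
) STORAGE (BUFFER_POOL KEEP);

ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);   -- or change an existing table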
============
SQL Tuning Tips
Avoid unnecessary sorts - use UNION ALL instead of UNION where duplicates are acceptable; use COMPUTE STATISTICS when creating an index.

Hint when needed.

Avoid Cartesian products

Avoid full table scans on large tables

Avoid joining too many tables

Use SQL standards and conventions to reduce parsing

Try to use indexed cols in the  WHERE clause

Use compound indexes with care (Do not repeat columns)

Skip-scan index (INDEX_SS) - to use a composite index even if the leading column is not a limiting condition in the SQL.

Monitor index browning (due to deletions; rebuild as necessary)

Avoid literals in the WHERE clause (use bind variables; see the sketch after this list).

Monitor V$SESSION_LONGOPS to detect long running operations

Use the CURSOR_SHARING parameter.

Finally, test correctly - with large data volumes and large numbers of users.
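
A minimal sketch of the bind-variable tip in SQL*Plus (EMP/EMPNO are placeholders):

-- hard parses a new cursor for every distinct value:
SELECT * FROM emp WHERE empno = 7839;

-- shares one cursor across values:
VARIABLE v_empno NUMBER
EXEC :v_empno := 7839
SELECT * FROM emp WHERE empno = :v_empno;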
============
Explain Plan: Execution Paths and Hints

USE_MERGE - the two inputs are separately sorted and then merged. No sort is required if a table already has an index on the join key. The merge waits until both sorts are complete.

Good when there are NO INDEXES on the tables.
When both tables are almost equal in size, big or small.

USE_NL - for each record in the first table, the second table is queried; so it is good when there is an index on the second (driven) table. The most selective table should be the driver.

When the tables are unequal in size.
When you need the FIRST_ROWS (for OLTP).

USE_HASH - the two tables are compared in memory. One is loaded into memory and a hash function is applied to it; the same hash function then probes it with rows from the other table.

When a table is small.
For OLTP.

The table listed first in a set of tables of the same level is the driving table.

For views, hints can be given as view_alias.table_alias, or with the system-generated query block name, e.g. @SEL$2 t1 index_name. If you want to name a query block yourself, use QB_NAME(qb) inside the view and FULL(@qb e) in the outer query, where e is the table alias.

If a hint specifies an unavailable access path, then the optimizer ignores it.
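
To see whether a hint was actually honored, display the plan; a minimal sketch (EMP/DEPT are placeholder tables):

EXPLAIN PLAN FOR
SELECT /*+ LEADING(d) USE_NL(e) */ e.ename, d.dname
FROM   emp e, dept d
WHERE  e.deptno = d.deptno;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);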

Three operations sort rows without grouping: ORDER BY (SORT ORDER BY), DISTINCT and the set operators (SORT UNIQUE), and sort-merge joins (SORT JOIN).
Note: UNION ALL is a row operation; the other set operators are set operations and require sorts.

Grouping functions use SORT AGGREGATE, or SORT GROUP BY if a GROUP BY clause is specified.

Merging of a query with a view can be disabled with NO_MERGE.

Oracle tries to combine subqueries with the rest of the query to get more execution options.

Hints:
OPT_PARAM - to set an initialization parameter for the duration of the current query only.

INDEX_SS - suggests a skip scan when the leading column of a composite index is not used.

INDEX_ASC, INDEX_DESC - scan the index in the specified order.

INDEX_FFS - Fast full scan

FIRST_ROWS, ALL_ROWS - Instruct the optimizer to choose the best approach to give the first or all rows.

ROWID - Table access by index rowid.

AND_EQUAL - merge results of multiple index scans.

DRIVING_SITE - instructs the optimizer to execute the query at a different site than that selected by the database. With DRIVING_SITE(tab1), if the table is tab1@r1, the query is executed at r1.

CURSOR_SHARING_EXACT - will not replace literals in SQL statements with bind variables.

INDEX_JOIN - instructs the optimizer to use an index join as an access path.

USE_CONCAT - instructs the optimizer to transform combined OR-conditions in the WHERE clause of a query into a compound query using the UNION ALL set operator

REWRITE - instructs the optimizer to rewrite a query in terms of materialized views, when possible, without cost consideration.

PUSH_PRED - pushes a join predicate into a view.
PUSH_SUBQ - forces non-merged subqueries to be evaluated at the earliest possible step.

FACT - instructs the optimizer that the table specified in tablespec should be considered as a fact table.

INDEX_COMBINE - instructs the optimizer to use a bitmap access path for the table.

MODEL_MIN_ANALYSIS - instructs the optimizer to omit some compile-time optimizations of spreadsheet rules.

MONITOR - forces real-time SQL monitoring for the query, even if the statement is not long running.

NATIVE_FULL_OUTER_JOIN - instructs the optimizer to use native full outer join, which is a native execution method based on a hash join.

NO_EXPAND - instructs the optimizer not to consider OR-expansion for queries having OR conditions or IN-lists in the WHERE clause.

STAR_TRANSFORMATION - instructs the optimizer to use the best plan in which the transformation has been used.

UNNEST - instructs the optimizer to unnest and merge the body of the subquery into the body of the query block that contains it, allowing the optimizer to consider them together when evaluating access paths and joins.


LEADING - instructs the optimizer to use the specified set of tables as the prefix in the execution plan.
ORDERED - instructs Oracle to join tables in the order in which they appear in the FROM clause.

PARALLEL (tab, no_of_parallel_processes, no_of_instances) - each instance will spawn the given number of parallel processes.

PARALLEL_INDEX - instructs the optimizer to use the specified number of concurrent servers to parallelize index range scans, full scans, and fast full scans for partitioned indexes.

PQ_DISTRIBUTE - instructs the optimizer how to distribute rows of joined tables among producer and consumer query servers. Such distribution can improve the performance of parallel join operations.

Statistics are stored in:

DBA_TAB_STATISTICS
DBA_TAB_COL_STATISTICS
DBA_TAB_HISTOGRAMS
============
Tuning Tools

 We all know Autotrace and EXPLAIN PLAN; let's look at the less common tools.

10g auto tools:


ADDM, automatic statistics collection, SQL Tuning Advisor.

AWR collects, processes and maintains performance statistics for problem detection and self-tuning.

SGA_TARGET - for tuning caches and pools.

End to End Tracing

trcsess - command line utility, merges trace info into a single file for processing with TKPROF.
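
A hedged sketch of trcsess usage (file names and the session id are placeholders):

trcsess output=combined.trc session=196.3930 *.trc
tkprof combined.trc combined.prf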


11g auto Tools:

Automatic Memory Tuning - MEMORY_TARGET

Resource Manager

Automatic AWR Baselines

Adaptive Metric Baselines

TKPROF:
The TKPROF program converts Oracle trace files into a more readable form.

ALTER SYSTEM SET TIMED_STATISTICS = TRUE;

ALTER SESSION SET SQL_TRACE = TRUE;

TKPROF <trace-file> <output-file> explain=user/password@service table=sys.plan_table
============
Cardinality and Selectivity

Cardinality - the number of distinct values.

Selectivity - the number of distinct values / total rows.

A highly selective index has a large number of distinct values relative to the total rows; for indexes with low selectivity it is often better to do a full table scan.

If the optimizer estimates that the selectivity of an index is low, it ignores the index. In that case we give hints.
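
A quick sketch of measuring a column's cardinality and selectivity (EMP/JOB are placeholders):

SELECT COUNT(DISTINCT job)            AS cardinality,
       COUNT(DISTINCT job) / COUNT(*) AS selectivity
FROM   emp;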
===============
Mailing Utilities
UTL_SMTP:
DECLARE
  -- the host and addresses below are placeholders
  l_mailhost  VARCHAR2(64) := 'smtp.example.com';
  l_from      VARCHAR2(64) := 'sender@example.com';
  l_to        VARCHAR2(64) := 'recipient@example.com';
  l_subject   VARCHAR2(64) := 'Test Mail';
  l_mail_conn UTL_SMTP.connection;
BEGIN
  l_mail_conn := UTL_SMTP.open_connection(l_mailhost, 25);
  UTL_SMTP.helo(l_mail_conn, l_mailhost);
  UTL_SMTP.mail(l_mail_conn, l_from);
  UTL_SMTP.rcpt(l_mail_conn, l_to);

  UTL_SMTP.open_data(l_mail_conn);

  UTL_SMTP.write_data(l_mail_conn, 'Date: '    || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS') || Chr(13));
  UTL_SMTP.write_data(l_mail_conn, 'From: '    || l_from || Chr(13));
  UTL_SMTP.write_data(l_mail_conn, 'Subject: ' || l_subject || Chr(13));
  UTL_SMTP.write_data(l_mail_conn, 'To: '      || l_to || Chr(13));
  UTL_SMTP.write_data(l_mail_conn, ''          || Chr(13));

  FOR i IN 1 .. 10 LOOP
    UTL_SMTP.write_data(l_mail_conn, 'This is a test message. Line ' || To_Char(i) || Chr(13));
  END LOOP;

  UTL_SMTP.close_data(l_mail_conn);

  UTL_SMTP.quit(l_mail_conn);
END;


UTL_MAIL:

UTL_MAIL.SEND(sender => 'xxx@oracleracexpert.com',
recipients => 'xxx@oracleracexpert.com',
cc => 'xxx@oracleracexpert.com',
bcc => 'xxx@oracleracexpert.com',
subject => 'Test Mail',
message => 'Hi, This is just a test mail');

UTL_MAIL.SEND_ATTACH_VARCHAR2(sender => 'xxx@oracleracexpert.com',
recipients => 'xxx@oracleracexpert.com',
cc => 'xxx@oracleracexpert.com',
bcc => 'xxx@oracleracexpert.com',
subject => 'Test Mail',
message => 'Hi, This is just a test mail',
attachment => 'text',
att_filename => 'test_attach.txt');
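
UTL_MAIL is not installed by default; a hedged setup sketch (run as SYS; the SMTP host is a placeholder, and on older versions the parameter change may require a restart):

@?/rdbms/admin/utlmail.sql
@?/rdbms/admin/prvtmail.plb
ALTER SYSTEM SET smtp_out_server = 'smtp.example.com' SCOPE=SPFILE;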

=================
