Tutorial: Hidden features in Oracle



Question:

I enjoyed the answers and questions about hidden features in SQL Server.

What can you tell us about Oracle?
Hidden tables, inner workings of ..., secret stored procs, packages that have good utils...


Solution:1

Since Apex is now part of every Oracle database, these Apex utility functions are useful even if you aren't using Apex:

declare
  v_array  apex_application_global.vc_arr2;
  v_string varchar2(2000);
begin

  -- Convert delimited string to array
  v_array := apex_util.string_to_table('alpha,beta,gamma,delta', ',');
  for i in 1..v_array.count
  loop
    dbms_output.put_line(v_array(i));
  end loop;

  -- Convert array to delimited string
  v_string := apex_util.table_to_string(v_array, '|');
  dbms_output.put_line(v_string);
end;
/

alpha
beta
gamma
delta
alpha|beta|gamma|delta

PL/SQL procedure successfully completed.


Solution:2

"Full table scans are not always bad. Indexes are not always good."

An index-based access method is less efficient at reading rows than a full scan when you measure it in terms of rows accessed per unit of work (typically per logical read). However, many tools interpret a full table scan as a sign of inefficiency.

Take an example where you are reading a few hundred invoices from an invoice table and looking up a payment method in a small lookup table. Using an index to probe the lookup table for every invoice probably means three or four logical I/Os per invoice. However, a full scan of the lookup table in preparation for a hash join from the invoice data would probably require only a couple of logical reads, and the hash join itself would complete in memory at almost no cost at all.
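The two access paths might be sketched like this (the invoice and lookup table names here are hypothetical, just to illustrate the hints involved):

-- Index-probe version: three or four logical I/Os per invoice row
select /*+ use_nl(pm) index(pm) */ i.invoice_id, pm.description
from   invoices i
join   payment_methods pm on pm.method_id = i.method_id;

-- Hash-join version: one full scan of the small lookup table,
-- then the join completes in memory
select /*+ use_hash(pm) full(pm) */ i.invoice_id, pm.description
from   invoices i
join   payment_methods pm on pm.method_id = i.method_id;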

However, many tools would look at this, see "full table scan", and tell you to try to use an index. If you do so, you may have just de-tuned your code.

Incidentally, over-reliance on indexes, as in the above example, causes the "Buffer Cache Hit Ratio" to rise. This is why the BCHR is mostly nonsense as a predictor of system efficiency.


Solution:3

The cardinality hint is mostly undocumented.

explain plan for
select /*+ cardinality(@inner 5000) */ *
from   (select /*+ qb_name(inner) */ * from dual)
/

select * from table(dbms_xplan.display)
/

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  5000 | 10000 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------


Solution:4

The Buffer Cache Hit Ratio is virtually meaningless as a predictor of system efficiency.


Solution:5

You can view table data as of a previous time using Flashback Query, with certain limitations.

select *
from   my_table as of timestamp (timestamp '2008-12-01 15:21:13');

11g has a whole new feature set around preserving historical changes more robustly.
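For example, 11g's Flashback Data Archive lets history survive beyond undo retention. A minimal sketch (archive and table names are hypothetical):

-- Create an archive and attach a table to it
create flashback archive my_archive
  tablespace users
  retention 1 year;

alter table my_table flashback archive my_archive;

-- AS OF queries against my_table can now reach back up to a year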


Solution:6

Frequent rebuilding of indexes is almost always a waste of time.


Solution:7

WM_CONCAT works like the MySQL GROUP_CONCAT function, but it is undocumented.

with data:

-car-     -maker-
Corvette  Chevy
Taurus    Ford
Impala    Chevy
Aveo      Chevy

select wm_concat(car) Cars, maker
from   cars
group  by maker

gives you:

-Cars-                   -maker-
Corvette, Impala, Aveo   Chevy
Taurus                   Ford


Solution:8

The OVERLAPS predicate is undocumented.

http://oraclesponge.wordpress.com/2008/06/12/the-overlaps-predicate/
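A quick illustration (since OVERLAPS is undocumented, treat it as unsupported; the dates here are made up):

-- Returns a row because the two date ranges intersect
select 'they overlap' as result
from   dual
where  (date '2008-01-01', date '2008-06-30')
       overlaps (date '2008-06-01', date '2008-12-31');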


Solution:9

I just found out about the pseudocolumn ORA_ROWSCN. If you don't set your table up for this, the pseudocolumn gives you the block SCN. This can be really useful for the emergency: "Oh crap, I have no auditing on this table and wonder if someone has changed the data since yesterday."

But even better: if you create the table with ROWDEPENDENCIES on, Oracle puts the SCN of the last change on every row. This helps you avoid a "lost edit" problem without having to include every column in your query.

In other words, when your app grabs a row for user modification, also select ORA_ROWSCN. Then, when you post the user's edits, include ORA_ROWSCN = v_rscn in addition to the unique key in the WHERE clause. If someone has touched the row since you grabbed it (a lost edit), the update will match zero rows, since ORA_ROWSCN will have changed.
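A minimal sketch of the pattern (table, column and bind names here are hypothetical):

-- Table must be created with ROWDEPENDENCIES for row-level SCNs

-- 1. Read the row and remember its SCN
select empno, sal, ora_rowscn
from   emp
where  empno = :id;

-- 2. Post the edit; zero rows updated means someone changed it first
update emp
set    sal = :new_sal
where  empno = :id
and    ora_rowscn = :v_rscn;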

So cool.


Solution:10

If you get the value of the PASSWORD column from DBA_USERS, you can back up and restore passwords without knowing them:

 ALTER USER xxx IDENTIFIED BY VALUES 'xxxx';  
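The round trip looks like this (the hash value is a placeholder; note that on 11g and later the hash is no longer exposed in DBA_USERS.PASSWORD):

-- Capture the stored hash
select password from dba_users where username = 'SCOTT';

-- Later, restore it without ever knowing the plaintext
alter user scott identified by values 'hash_captured_above';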


Solution:11

Bypass the buffer cache and read straight from disk using direct path reads.

alter session set "_serial_direct_read"=true;  

This causes a tablespace checkpoint (9i) or fast object checkpoint (10g+), so be careful on busy OLTP systems.


Solution:12

More undocumented stuff at http://awads.net/wp/tag/undocumented/

Warning: Use at your own risk.


Solution:13

I don't know if this counts as hidden, but I was pretty happy when I found this quick way of seeing what actually happened with a SQL statement you are tuning.

SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM DUAL;

SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'RUNSTATS_LAST'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------
SQL_ID  5z36y0tq909a8, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM DUAL

Plan hash value: 272002086

---------------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------
|   1 |  TABLE ACCESS FULL| DUAL |      1 |      1 |      1 |00:00:00.02 |       3 |      2 |
---------------------------------------------------------------------------------------------

12 rows selected.

Where:

  • E-Rows is estimated rows.
  • A-Rows is actual rows.
  • A-Time is actual time.
  • Buffers is actual buffers.

Where the estimated plan varies from the actual execution by orders of magnitude, you know you have problems.


Solution:14

Not a hidden feature, but fine-grained access control (FGAC), also known as row-level security, is something I have used in the past and was impressed with the efficiency of its implementation. If you are looking for something that guarantees you can control the granularity of how rows are exposed to users with differing permissions - regardless of the application used to view the data (SQL*Plus as well as your web app) - then this is a gem.
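Policies are attached with the DBMS_RLS package. A hedged sketch (the schema, table, and predicate function here are hypothetical):

-- Predicate function: each user sees only their own rows
create or replace function my_predicate(
  p_schema varchar2, p_object varchar2) return varchar2
is
begin
  return 'owner_id = sys_context(''userenv'', ''session_user'')';
end;
/

begin
  dbms_rls.add_policy(
    object_schema   => 'APP',
    object_name     => 'ORDERS',
    policy_name     => 'ORDERS_RLS',
    function_schema => 'APP',
    policy_function => 'MY_PREDICATE',
    statement_types => 'SELECT,UPDATE,DELETE');
end;
/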

The built-in full-text indexing is more widely documented, but still stands out because of its stability (just try a full re-index of full-text-indexed columns on similar data samples in MS SQL Server and Oracle and you'll see the speed difference).


Solution:15

WITH Clause
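For anyone who hasn't met it, the WITH clause (subquery factoring) names a subquery so it can be referenced several times in one statement. A small sketch against the classic SCOTT demo schema:

with dept_costs as (
  select deptno, sum(sal) total_sal
  from   emp
  group  by deptno
)
select *
from   dept_costs
where  total_sal > (select avg(total_sal) from dept_costs);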


Solution:16

Snapshot tables. Also found in Oracle Lite, and extremely useful for rolling your own replication mechanism.
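Snapshots live on as materialized views. A minimal replication-style sketch (the database link and table names are hypothetical):

-- Requires a materialized view log on emp at the master site
create materialized view emp_copy
  refresh fast
  start with sysdate
  next  sysdate + 1/24   -- refresh every hour
as
select * from emp@remote_db;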


Solution:17

@Peter

You can actually bind a variable of type "Cursor" in TOAD, then use it in your statement and it will display the results in the result grid.

exec open :cur for select * from dual;  


Solution:18

Q: How do you call a stored procedure with a cursor parameter from TOAD?

A: For example (change the cursor type, package name and stored proc name to your own):

declare
  v_cursor PCK_UTILS.typ_cursor;
begin
  PCK_UTILS.spc_get_encodedstring(
    'U',
    10000002,
    null,
    'none',
    v_cursor);
end;


Solution:19

The Model Clause (available for Oracle 10g and up)
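A tiny illustration of the MODEL clause's spreadsheet-style rules (the data here is made up; the rule upserts a new row for 2010):

select yr, sales
from  (select 2008 yr, 100 sales from dual
       union all
       select 2009, 150 from dual)
model
  dimension by (yr)
  measures (sales)
  rules (
    sales[2010] = sales[2008] + sales[2009]  -- derives a new 2010 cell
  );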


Solution:20

WM_CONCAT for string aggregation
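Worth noting: from 11gR2 onwards the documented alternative is LISTAGG, shown here against the cars table from the earlier answer:

select maker,
       listagg(car, ',') within group (order by car) as cars
from   cars
group  by maker;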


Solution:21

Scalar subquery caching is one of the most surprising features in Oracle.

-- my_function is NOT deterministic but it is cached!
select t.x, t.y, (select my_function(t.x) from dual)
from   t;

-- logically equivalent to this, uncached:
select t.x, t.y, my_function(t.x)
from   t;

The "caching" subquery above evaluates my_function(t.x) only once per unique value of t.x. If you have large partitions of the same t.x value, this will immensely speed up your queries, even if my_function is not declared DETERMINISTIC. And even if it were DETERMINISTIC, you can still save yourself a possibly expensive SQL-to-PL/SQL context switch.

Of course, if my_function is not a deterministic function, then this can lead to wrong results, so be careful!

