Let's look at a simple scenario: we are querying a table big_table with a full table scan, and we know the table holds about 2 million blocks. Yet the query is reading considerably more data than those 2 million blocks. What is the reason? The short answer is that Oracle is also reading from the undo tablespace to ensure read consistency: it is possible that big_table was heavily modified by other sessions and those changes have not yet been committed, so our query must roll the changed blocks back to a consistent image.
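Before looking at wait events, one way to check for such open transactions is to join V$TRANSACTION to V$SESSION. This is a hedged sketch using standard column names (USED_UBLK, USED_UREC, TADDR); it shows which sessions currently hold uncommitted changes and how much undo they have generated:

```sql
-- List sessions with open (uncommitted) transactions and
-- the amount of undo each one has generated so far.
SELECT s.sid,
       s.username,
       t.used_ublk  AS undo_blocks,
       t.used_urec  AS undo_records,
       t.start_time
FROM   v$transaction t
       JOIN v$session s
         ON s.taddr = t.addr;
```

A large USED_UBLK against big_table's modifying session would support the read-consistency explanation.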
One way to validate this assumption is to check which datafile the session is currently reading. For I/O wait events such as db file scattered read, P1 in v$session_wait is the file number. Here the SID of the session running the full table scan on big_table is 161:
SQL> select NAME from v$datafile where FILE# in ( select p1 from v$session_wait where sid = 161);
NAME
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/testdb_ora/ora_data01/undo01.dbf
The output confirms that the session is reading from the undo tablespace.
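A second way to confirm this, sketched here under the assumption that the session of interest is still SID 161, is to look at the session statistic "data blocks consistent reads - undo records applied", which counts undo records applied while building consistent-read block images:

```sql
-- How many undo records this session has applied for read consistency.
SELECT n.name, st.value
FROM   v$sesstat  st
       JOIN v$statname n
         ON n.statistic# = st.statistic#
WHERE  st.sid = 161
AND    n.name = 'data blocks consistent reads - undo records applied';
```

A value that keeps growing while the query runs indicates ongoing undo application, consistent with what we saw from v$session_wait.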