There are very few things that I know, but there are tons of things that I don't know. I came across one more of them a few days ago when this question (http://forums.oracle.com/forums/thread.jspa?messageID=2713561) came up on OTN. Interestingly, what the OP mentioned was indeed true, and I didn't have a clue what was going on. As happens most of the time, JL had an answer to the puzzle and gave a lead. This blog post is not about any "new" invention of mine; it's merely a collection of my understanding of a new mechanism in 10g-and-later SGAs. So with that note, let's get the party started.
Dirty Buffers & Redo Log Buffer
In the Oracle buffer cache, buffers stay in multiple states. When a buffer has not been changed and is simply available for use, it is a clean buffer. When a buffer is changed, its status is marked dirty. Oracle uses this status to provide read consistency for other users who access the same buffer while the transaction is still alive. In simple terms, a dirty buffer is one whose current image is no longer the image it had when it was brought into the cache. For such a dirty buffer, Oracle maintains CR (consistent read) buffers, aka snapshot blocks, which provide read consistency for the other sessions.
The change vectors for the buffer are also logged in the redo log buffer before they are applied to the actual buffers (write-ahead logging). This is how Oracle maintains the assurance that, no matter what, if the data is in the log buffer it is safe. The contents of the log buffer are written by LGWR to the redo log files; once there, they are permanently saved and hence protected against any crash.
To log changes in the log buffer, Oracle typically follows this process:
1) The server process calculates how much space will be needed in the log buffer.
2) The server process acquires a redo copy latch. This announces that some redo is about to be copied into the log buffer.
3) It acquires a redo allocation latch. This is the latch that actually allocates space in the log buffer.
4) The redo allocation latch is released once the space has been allocated.
5) Under the redo copy latch, the redo change vectors are copied into the log buffer.
6) The redo copy latch is released.
After the copy is complete and the change vectors have been applied to the buffers in the buffer cache, LGWR may, depending on various conditions, start writing the contents of the log buffer to the log files. How many of these latches the database currently has can be seen from V$latch_children. Here is the output from my 11gR1 machine.
  1  select latch#,child#,name from V$latch_children
  2* where name like '%redo%' order by 1,2
SQL> /
LATCH# CHILD# NAME
---------- ---------- ----------------------------------------------------------------
169 1 redo copy
169 2 redo copy
169 3 redo copy
169 4 redo copy
170 1 redo allocation
170 2 redo allocation
170 3 redo allocation
170 4 redo allocation
170 5 redo allocation
170 6 redo allocation
170 7 redo allocation
170 8 redo allocation
170 9 redo allocation
170 10 redo allocation
170 11 redo allocation
170 12 redo allocation
170 13 redo allocation
170 14 redo allocation
170 15 redo allocation
170 16 redo allocation
170 17 redo allocation
170 18 redo allocation
170 19 redo allocation
170 20 redo allocation
24 rows selected.
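Whether these latches are actually a point of contention can be gauged from their miss statistics. Here is a sketch of such a check against the documented V$LATCH view (run as a privileged user; treat the exact miss-ratio threshold that worries you as workload-dependent):

```sql
-- Sketch: gauge contention on the redo latches via their miss ratios,
-- aggregated across all child latches in V$LATCH.
SELECT name,
       gets,
       misses,
       ROUND(misses / NULLIF(gets, 0) * 100, 4) AS miss_pct,
       immediate_gets,
       immediate_misses
FROM   v$latch
WHERE  name IN ('redo copy', 'redo allocation')
ORDER  BY name;
```

Note that the redo copy latch is normally taken in no-wait mode, so for it the immediate_gets/immediate_misses columns are the interesting ones.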
With heavy processing and lots of redo being generated, there may be contention on the redo allocation latches. For this reason, 9i had a parameter, log_parallelism (http://download.oracle.com/docs/cd/B10501_01/server.920/a96536/ch1105.htm#REFRN10099), which could be used to make redo generation faster by splitting the log buffer into smaller sub-buffers. The parameter exists only up to 9i; from 10.1 onwards it is deprecated (http://download.oracle.com/docs/cd/B28359_01/server.111/b28300/changes.htm#UPGRD12575), because 10g changed the mechanism that handles excessive redo workload, and that mechanism is the actual topic of this post.
So the moral of the story so far is that when we change something in the buffer cache, the change is protected in the log buffer, and two latches are mainly used to do that. The changes in the log buffer are then protected in the physical log files by LGWR.
You must be thinking, isn't this just a replica of the docs? Well, certainly it's the basics, but I thought I should write it down before I forget it all, and it introduces the terms that will come up a little later in the post. So chill :). I told you this is not a new invention but a collection.
What’s Wrong With All This
Well, actually nothing much. This is something that has been happening since the inception of Oracle, I believe. Some things came and went, but in a nutshell things remained the same. This allocation of latches and the rest of it worked fine, but, as all good things come to an end, it too needed modification. With workloads getting heavier, databases getting bigger, and transaction rates getting higher, it becomes too tough to manage redo allocation using latches alone. The reasons are well documented in the Performance Tuning guide: with lots of redo, the latches required to copy the redo data into the log buffer come under contention, and so does the LGWR process. There is a workaround of invoking multiple slaves for LGWR on heavy systems, but that is not recommended unless it has been fully benchmarked. Sure, there are tons of guidelines for users, such as keeping transactions small and putting the redo on good media, but they can only be applied up to a point. So things needed to be modified at the grass-roots level, and that's what happened from 10g (10.1) onwards.
A Strange Output
With the dirty-buffer discussion given above, the idea behind them should be clear. So here is a small table creation and modification, after which we should see some dirty buffers getting created.
SQL> select * from V$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
SQL> alter system flush buffer_cache;
System altered.
SQL> create table t(a number)
2 /
Table created.
SQL> SELECT b.status, object_name, object_type, dirty "Dirty" FROM v$bh b, dba_objects o
  2  WHERE b.objd = o.data_object_id AND o.owner = 'AMAN' and object_name='T';
STATUS OBJECT_NAME
---------- -----------------------------------------------------------------------------------
---------
OBJECT_TYPE D
------------------- --
xcur T
TABLE Y
xcur T
TABLE Y
xcur T
TABLE Y
SQL> insert into t values(1);
1 row created.
SQL> SELECT b.status, object_name, object_type, dirty "Dirty" FROM v$bh b, dba_objects o
  2  WHERE b.objd = o.data_object_id AND o.owner = 'AMAN' and object_name='T';
STATUS OBJECT_NAME
---------- -----------------------------------------------------------------------------------
---------
OBJECT_TYPE D
------------------- --
xcur T
TABLE Y
xcur T
TABLE N
xcur T
TABLE N
xcur T
TABLE Y
xcur T
TABLE N
xcur T
TABLE Y
xcur T
TABLE N
xcur T
TABLE N
8 rows selected.
SQL>
SQL> commit;
Commit complete.
SQL> SELECT b.status, object_name, object_type, dirty "Dirty" FROM v$bh b, dba_objects o
  2  WHERE b.objd = o.data_object_id AND o.owner = 'AMAN' and object_name='T';
STATUS OBJECT_NAME
---------- -----------------------------------------------------------------------------
---------
OBJECT_TYPE D
------------------- --
xcur T
TABLE Y
xcur T
TABLE Y
xcur T
TABLE Y
xcur T
TABLE Y
xcur T
TABLE Y
xcur T
TABLE Y
xcur T
TABLE Y
xcur T
TABLE Y
8 rows selected.
SQL>
Well, if you look at the output with a little care, you will see that even after we issued the DML (the insert in this case), the buffers Oracle acquired to hold our change were not all marked dirty; several showed up with a dirty flag of N. But when we issued the commit, all the buffers were marked dirty. Why so?
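A hint at the answer can be seen from the session statistics. This is a sketch; the statistic names below exist on my 10g/11g test systems, but verify them against V$STATNAME on your own version:

```sql
-- While the insert is still uncommitted, the session's IMU statistics
-- grow even though the new buffers are not yet marked dirty.
SELECT n.name, s.value
FROM   v$statname n, v$mystat s
WHERE  n.statistic# = s.statistic#
AND    n.name IN ('IMU undo allocation size',
                  'IMU Redo allocation size',
                  'redo size');
```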
The New SGA Areas: IMU / Private Redo Strands
Due to the allocation and redo latch contention described above, from 10.1 onwards Oracle has introduced private redo allocation areas and In Memory Undo.
IMU(In Memory Undo)
For the same reason that Oracle needs to protect redo, it also has to protect undo, to make sure the block's old contents are saved as well. Traditionally this requires access to the undo blocks, and then an update of the transaction table to log the changes. The transaction records can be seen in V$transaction; to look at undo segment header blocks directly we can check X$KTUXE, and X$KTCXB is the base table underlying V$transaction. From 10g onwards, Oracle still uses the undo blocks, but it first stages the undo in private memory areas created specifically in the shared pool. Each pool contains the undo data of one transaction; the undo is recorded in the segment header but not written into the undo blocks straight away. These areas are controlled by two parameters, _in_memory_undo and _imu_pools, both of which can be seen from the base tables x$ksppi and x$ksppcv2.
SQL> l
  1  select KSPFTCTXDVL,kspftctxdf,ksppinm
  2  from x$ksppcv2 a,x$ksppi b
  3* where a.indx=b.indx and b.ksppinm like '%imu%'
SQL> column KSPFTCTXDVL format a10
SQL> /
KSPFTCTXDV KSPFTC KSPPINM
---------- ------ ---------------------------------------------------
2 TRUE _simulator_upper_bound_multiple
16 TRUE _simulator_pin_inval_maxcnt
10240 TRUE _simulator_lru_rebalance_thresh
5 TRUE _simulator_lru_rebalance_sizthr
8192 TRUE _simulator_bucket_mindelta
8 TRUE _simulator_lru_scan_count
10 TRUE _simulator_internal_bound
1024 TRUE _simulator_reserved_obj_count
4096 TRUE _simulator_reserved_heap_count
2 TRUE _simulator_sampling_factor
0 TRUE _simulate_mem_transfer
0 TRUE _db_writer_flush_imu
0 TRUE _minimum_giga_scn
1800 TRUE _log_simultaneous_copies
d:\oracle\ FALSE _allow_error_simulation
flash_reco
very_area
SLFLUSH TRUE _gc_policy_minimum
FALSE TRUE _gc_maximum_bids
TRUE TRUE _minimum_db_flashback_retention
600 TRUE _recursive_imu_transactions
TRUE TRUE _smu_error_simulation_site
TRUE TRUE _smu_error_simulation_type
0 TRUE _imu_pools
10 TRUE _minimum_blocks_to_shrink
TRUE TRUE _minimum_extents_to_shrink
RUN_TIME TRUE _plsql_minimum_cache_hit_percent
25 rows selected.
And
SQL> l
  1  select KSPFTCTXDVL,kspftctxdf,KSPFTCTXVL,KSPFTCTXIS,ksppinm
  2  from x$ksppcv2 a,x$ksppi b
  3* where a.indx=b.indx and b.ksppinm like '%undo%'
SQL> /
KSPFTCTXDV KSPFTC KSPFTCTXVL KSPFTC KSPPINM
---------- ------ ---------- ------ --------------------
FALSE TRUE FALSE FALSE _gc_undo_affinity
AUTO TRUE AUTO FALSE _gc_dissolve_undo_af
finity
TRUE TRUE TRUE FALSE _gc_initiate_undo_af
finity
0 TRUE 0 FALSE _gc_undo_block_disk_
reads
TRUE FALSE _undo_block_compress
ion
TRUE FALSE undo_management
TRUE FALSE undo_tablespace
100 TRUE 100 FALSE _collect_undo_stats
0 TRUE 0 FALSE _undo_debug_mode
0 TRUE 0 FALSE _verify_undo_quota
TRUE FALSE _in_memory_undo
FALSE TRUE FALSE FALSE _undo_autotune
FALSE TRUE FALSE FALSE undo_retention
0 TRUE 0 FALSE _undo_debug_usage
0 TRUE 0 FALSE _smon_undo_seg_resca
n_limit
0 TRUE 0 FALSE _optimizer_undo_cost
_change
TRUE TRUE TRUE FALSE _optimizer_undo_chan
ges
17 rows selected.
Each pool is protected by a latch called the "In memory undo latch". These can be seen in V$latch_children:
SQL> select name from V$latch_children where name like '%undo%';
NAME
----------------------------------------------------------------
undo global data
undo global data
undo global data
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
In memory undo latch
21 rows selected.
We can see that there are several undo-related structures available in the shared pool.
SQL> select pool,name from V$sgastat where pool='shared pool' and lower(name) like '%undo%';
POOL NAME
------------ --------------------------
shared pool UNDO INFO SEGMENTED ARRAY
KCB undo bitvec
Undo Meta Data
Auto tune undo info
UNDO INFO
KTI-UNDO
UNDO INFO HASH
UNDO STAT INFO
The IMU pools are allocated in the shared pool. The data that we change gets copied into these pools under the protection of a latch called the In memory undo latch. We can see it here in V$latch:
SQL> select name from V$latch where name like '%undo%';
NAME
----------------------------------------------------------------
undo global data
corrupted undo seg lock
In memory undo latch
When we modify data, Oracle doesn't apply the change to the undo blocks immediately; with the help of this IMU latch it records the undo in an IMU pool in the shared pool. This mechanism avoids pinning the undo segment header block and the undo data blocks, which would otherwise be touched as soon as the transaction starts; now they are only used when the transaction commits. The data is in the pool, and from there it gets copied into the buffer cache. As each pool is assigned to one transaction and is protected by an individual latch, the mechanism works well to reduce contention. I am still not clear what happens in a very, very heavy OLTP environment, whether the number of pools is increased at that point or something else happens; I am not sure.
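Instance-wide IMU activity can also be tracked through statistics, which conveniently all begin with "IMU". A sketch (the exact set of statistics returned varies by version):

```sql
-- All IMU-related statistics at the instance level, e.g. IMU commits,
-- IMU Flushes, IMU undo allocation size, IMU CR rollbacks.
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE 'IMU%'
ORDER  BY name;
```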
The IMU pools are also described in X$ktifp. Each pool is 65,535 bytes (roughly 64KB), as can be seen here:
SQL> select ktifpno,ktifppsi from x$ktifp;
KTIFPNO KTIFPPSI
---------- ----------
0 65535
1 65535
2 65535
3 65535
4 65535
5 65535
6 65535
7 65535
8 65535
9 65535
10 65535
KTIFPNO KTIFPPSI
---------- ----------
11 65535
12 65535
13 65535
14 65535
15 65535
16 65535
17 65535
18 rows selected.
So there are 18 pools allocated, each roughly 64KB in size. This undo data is "flushed" to the undo segments when the pools are full, and the write happens in batches. There are various events on which the data in the IMU pools gets flushed; the reasons are listed in the base table x$ktiff:
SQL> l
  1  select KTIFFCAT, KTIFFFLC
  2* from x$ktiff
SQL> /
KTIFFCAT KTIFFFLC
---------------------------------------- ----------
Undo pool overflow flushes 1
Stack cv flushes 9
Multi-block undo flushes 0
Max. chgs flushes 2
NTP flushes 0
Contention flushes 9
Redo pool overflow flushes 0
Logfile space flushes 0
Multiple persistent buffer flushes 0
Bind time flushes 0
Rollback flushes 335
Commit flushes 1570
Recursive txn flushes 2
Redo only CR flushes 0
Ditributed txn flushes 0
Set txn use rbs flushes 0
Bitmap state change flushes 21
Presumed commit violation 0
Securefile direct-write lob update flush 0
es
19 rows selected.
With some simple DML we can see data being changed and the IMU being flushed.
In session 1, I created a table:
C:\Documents and Settings\Aman>sqlplus / as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Sat Aug 30 11:42:21 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
SQL> create table a as select * from dba_objects;
Table created.
SQL>
And in session 2 (the same session from which we saw the flush events earlier):
SQL> /
KTIFFCAT KTIFFFLC
---------------------------------------- ----------
Undo pool overflow flushes 1
Stack cv flushes 9
Multi-block undo flushes 0
Max. chgs flushes 2
NTP flushes 0
Contention flushes 9
Redo pool overflow flushes 0
Logfile space flushes 0
Multiple persistent buffer flushes 0
Bind time flushes 0
Rollback flushes 338
Commit flushes 1592
Recursive txn flushes 2
Redo only CR flushes 0
Ditributed txn flushes 0
Set txn use rbs flushes 0
Bitmap state change flushes 21
Presumed commit violation 0
Securefile direct-write lob update flush 0
es
19 rows selected.
And in session 1,
SQL> select n.name, s.value from v$statname n, v$sesstat s
  2  where n.statistic# = s.statistic#
  3  and s.sid = (select sid from v$mystat where rownum = 1)
  4  and n.name in ('IMU Flushes','IMU commits');
NAME VALUE
---------------------------------------------------------------- ----------
IMU commits 0
IMU Flushes 0
SQL>
We change some data in session 1:
SQL> insert into a select * from a;
68864 rows created.
SQL> select n.name, s.value from v$statname n, v$sesstat s
  2  where n.statistic# = s.statistic#
  3  and s.sid = (select sid from v$mystat where rownum = 1)
  4  and n.name in ('IMU Flushes','IMU commits');
NAME VALUE
---------------------------------------------------------------- ----------
IMU commits 0
IMU Flushes 1
And we got a flush, because the amount of data generated was too big. Let's look at session 2:
SQL> /
KTIFFCAT KTIFFFLC
---------------------------------------- ----------
Undo pool overflow flushes 1
Stack cv flushes 9
Multi-block undo flushes 0
Max. chgs flushes 2
NTP flushes 0
Contention flushes 9
Redo pool overflow flushes 1
Logfile space flushes 0
Multiple persistent buffer flushes 0
Bind time flushes 0
Rollback flushes 353
Commit flushes 1654
Recursive txn flushes 2
Redo only CR flushes 0
Ditributed txn flushes 0
Set txn use rbs flushes 0
Bitmap state change flushes 21
Presumed commit violation 0
Securefile direct-write lob update flush 0
es
19 rows selected.
After some more changes in session 1,
SQL> insert into a select * from a;
137728 rows created.
SQL> select n.name, s.value from v$statname n, v$sesstat s
  2  where n.statistic# = s.statistic#
  3  and s.sid = (select sid from v$mystat where rownum = 1)
  4  and n.name in ('IMU Flushes','IMU commits');
NAME VALUE
---------------------------------------------------------------- ----------
IMU commits 0
IMU Flushes 1
SQL> commit;
Commit complete.
SQL> select n.name, s.value from v$statname n, v$sesstat s
  2  where n.statistic# = s.statistic#
  3  and s.sid = (select sid from v$mystat where rownum = 1)
  4  and n.name in ('IMU Flushes','IMU commits');
NAME VALUE
---------------------------------------------------------------- ----------
IMU commits 0
IMU Flushes 1
SQL>
We look at session 2 again:
SQL> /
KTIFFCAT KTIFFFLC
---------------------------------------- ----------
Undo pool overflow flushes 1
Stack cv flushes 9
Multi-block undo flushes 0
Max. chgs flushes 2
NTP flushes 0
Contention flushes 10
Redo pool overflow flushes 1
Logfile space flushes 0
Multiple persistent buffer flushes 0
Bind time flushes 0
Rollback flushes 359
Commit flushes 1688
Recursive txn flushes 2
Redo only CR flushes 0
Ditributed txn flushes 0
Set txn use rbs flushes 0
Bitmap state change flushes 21
Presumed commit violation 0
Securefile direct-write lob update flush 0
es
19 rows selected.
SQL>
It is worth mentioning that Oracle introduced a similar kind of mechanism earlier with Discrete Transaction Management: http://download.oracle.com/docs/cd/B10501_01/server.920/a96524/c17trans.htm#421
But that didn't work out too well, because DTM used to pin the data blocks for the duration of the transaction, and only when the transaction committed was the data written to the redo logs. With private redo, no pinning of the data blocks is done, which makes it much better than DTM. DTM is obsolete now.
Private Redo Strands (Private Redo)
Earlier, the redo allocation latches were the serialization point for the entire log buffer. This was a major contention point: until a process could get the latch, it could not write its redo change vectors into the log buffer. From 10.1 onwards, redo generation is broken up into smaller sub-buffers called private redo strands (shown in x$kcrfstrand), which are allocated in the shared pool. Each private strand is protected by its own redo allocation latch. Changed data no longer contends to go straight into the log buffer as before; it goes into a private redo strand, and from there into the log buffer when the strand becomes full or when the transaction commits. We can see the private strands in the shared pool from V$sgastat:
SQL> select name,pool from V$sgastat
  2  where name like '%private%';
NAME POOL
-------------------------- ------------
private strands shared pool
KCB private handles shared pool
At a log file switch, if data is still sitting in the private redo strands, LGWR may have to wait for DBWR to complete the flush before the switch can finish; hence the wait event log file switch (private strand flush incomplete) may occur. See the doc note for it:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3117
This is controlled by the parameter _log_parallelism, a dynamic hidden parameter in 10g. Its value depends on the cpu_count parameter, and Oracle starts with 2 strands.
The benefit of private redo strands is, for the same reasons, that LGWR becomes less busy and contention on the redo allocation latches decreases.
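The strands themselves can be peeked at through x$kcrfstrand. It is an undocumented fixed table, so the column names below are assumptions based on what shows up on my test versions and may well differ on yours:

```sql
-- Sketch: list the redo strands and their sizes from the undocumented
-- fixed table x$kcrfstrand (column names assumed; verify on your version).
SELECT indx, strand_size_kcrfa
FROM   x$kcrfstrand
ORDER  BY indx;
```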
Reference:
www.tanelpoder.com/files/Oracle_Performance_Scalability_10g_11g.pdf
http://www.fortunecity.com/skyscraper/oracle/699/orahtml/hayden/discrete.html
Wow Aman sir, this is really nice.. most of all that IMU part.. damn good
Thanks Vikrant!
Aman….
Hi sir,
I am confused about DBWR...
DBWR writes changed blocks from the db buffer cache in memory to the datafiles on disk, right?
LGWR writes redo log data from the log buffer to the online log files on disk.
They are not connected, right??
Then why does LGWR have to wait for DBWR to perform a flush before a log switch can occur???
Furqan,
Well, in the very first place, it's not "used to"; the same mechanism still applies.
About the part of whether DBWR and LGWR are connected or not: yes, they indeed are! That's why they communicate through IPC. Think about it: if they didn't, how would it be possible to keep the entries across the caches consistent?
Where did I say that LGWR has to wait for DBWR before a log switch can happen?
Cheers
Aman….
Hi Aman,
Hope you remember me from Oracle database 11g Performance Tuning class in Bangalore.
I was going through your above blog and got stuck with a doubt. Here it goes;
You have described the IMU pools in the shared pool in a way that implies that the UNDO RECORDS are held in them. As far as my knowledge goes, the redo change vectors pertaining to the undo records are kept in the IMU pools, but not the undo records themselves.
Please clarify.
Thanks,
VINEET
Hi Aman,
I somehow feel that your reply to Furqan is not very clear and crisp. Could you elaborate more on the interaction between LGWR and DBWR in newer versions of the Oracle database?
Thanks,
VINEET
Vineet,
"redo change vectors pertaining to the UNDO records are kept in the IMU pools but not the UNDO records itself."
No, that's incorrect. The biggest difference between IMU and traditional undo management is that in the latter, the undo for a transaction is maintained in undo blocks within the undo tablespace. With IMU, it's the same undo entry that is stored, but rather than in a block, it lives in a data structure in the shared pool, inside the In Memory Undo pools. The benefit is that if you want to access the undo data, it is right there in memory, in the shared pool.
Hope that helps.
Aman….
PS: I am sorry but I can’t recall you.
Vineet,
I think I did reply to what was asked. DBWR and LGWR don't work in isolation; they work together. Still, I shall see if I can find some time to write about it.
Aman….