Saturday, 31 March 2018

Faster rollback processing of very large transactions in Db2 v11.1.3.3

- David Sciaraffa, Software Development Manager – IBM Db2


In database management systems, it is common for periodic batch workloads to perform large quantities of insert, update, or delete operations; these are often referred to as the ‘nightly batch job’. Such large batch operations can put pressure on database manager resources.

In Db2, to satisfy the Durability property of ACID compliance, all insert/update/delete (i/u/d) operations on data rows are also recorded in the database recovery log files. This log data may be required later to recover from a system failure or interruption, or for replication purposes. For large batch operations, the log data often spans many log files.

Diagram 1) depicts how large batch operations can generate log records that span multiple log files:

[Diagram 1]


Note: Db2 recovery log files exploit Direct I/O, bypassing the file system cache, in order to optimize write performance. This is imperative given the performance sensitivity of writing log data, and practical because there is minimal read activity against the recovery log files during normal database runtime.
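
The distinction between Direct I/O and buffered I/O can be illustrated with a small generic sketch in Python. This is not Db2 code; the file path and block size are hypothetical, and os.O_DIRECT is Linux-specific:

    import mmap
    import os

    LOG_FILE = "/tmp/example.log"   # hypothetical file, standing in for a recovery log file
    BLOCK = 4096                    # typical logical block size; Direct I/O must be block-aligned

    # Create some sample data first (a buffered write is fine for this setup step).
    with open(LOG_FILE, "wb") as f:
        f.write(b"\0" * BLOCK * 4)

    # Direct I/O (Linux): the read bypasses the file system cache entirely.
    # The buffer, offset, and length must be block-aligned, so mmap is used
    # to obtain a page-aligned buffer.
    fd = os.open(LOG_FILE, os.O_RDONLY | os.O_DIRECT)
    aligned_buf = mmap.mmap(-1, BLOCK)      # anonymous, page-aligned buffer
    os.preadv(fd, [aligned_buf], 0)         # data comes straight from disk
    os.close(fd)

    # Buffered I/O: the same read goes through the file system cache, so recently
    # written or read data can be served from memory, and the OS may prefetch
    # subsequent blocks.
    with open(LOG_FILE, "rb") as f:
        first_block = f.read(BLOCK)

The trade-off is visible in the second half of the sketch: only the buffered open allows later reads to be served from memory.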

As a common best practice, large batch operations should be divided into smaller batches, with an explicit COMMIT performed after each small batch, to avoid filling the Db2 active log space and to reduce the amount of work to undo or redo after an interruption or failure. Nevertheless, large batch operations are common in practice, and they can be interrupted, either intentionally by the user or unintentionally, for example when the application is forced off the database by workload management enforcement settings.
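
As a rough sketch of the batching pattern, the loop below commits after every few thousand rows instead of once at the end. The connection string, table name, and batch size are hypothetical; ibm_db_dbi is the DB-API driver that ships with the ibm_db Python package, and any DB-API driver would look much the same:

    import ibm_db_dbi  # DB-API 2.0 driver from the ibm_db package (assumed installed)

    BATCH_SIZE = 10_000   # commit after every 10,000 rows; tune for your workload

    # Hypothetical connection details and table; adjust for your environment.
    conn = ibm_db_dbi.connect(
        "DATABASE=SAMPLE;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;"
        "UID=db2inst1;PWD=secret;", "", "")
    cur = conn.cursor()

    rows = ((i, f"payload-{i}") for i in range(1_000_000))  # stand-in data source

    pending = 0
    for row in rows:
        cur.execute("INSERT INTO STAGING_TABLE (ID, PAYLOAD) VALUES (?, ?)", row)
        pending += 1
        if pending >= BATCH_SIZE:
            conn.commit()   # bounds active log space usage and limits undo/redo work
            pending = 0

    conn.commit()           # commit the final partial batch
    cur.close()
    conn.close()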

When large batch operations are interrupted or fail, potentially large quantities of data row changes must be rolled back (undone). This requires reading back through all of the transaction log data recorded in the recovery log files since the beginning of the batch job. Because Db2 recovery log files use Direct I/O during database runtime (unlike during crash recovery), Db2 does not benefit from file system prefetching and caching when reading back these log records.

Starting in Db2 Version 11.1.3.3, there is a significant performance improvement for rollback processing of large units of work during database runtime, through the autonomous use of file system caching (buffered I/O) when reading recovery log file data.

By autonomously re-opening the log files with buffered I/O when a large rollback begins, Db2 allows reads of the log data to be served from the file system cache and to benefit from prefetching.
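
Conceptually, the behaviour resembles the generic sketch below: when a large rollback begins, the log file is re-opened without O_DIRECT, and the undo scan reads the log data back through the file system cache. The file name, block size, and backward scan are illustrative only, not Db2 internals:

    import os

    LOG_FILE = "/tmp/example.log"   # hypothetical stand-in for an active recovery log file
    BLOCK = 4096

    # During forward processing the log is written with Direct I/O. For a large
    # rollback, re-open it *without* O_DIRECT so the reads below go through the
    # file system cache and benefit from caching and prefetching.
    size = os.path.getsize(LOG_FILE)
    with open(LOG_FILE, "rb") as log:
        # Walk the file backwards one block at a time, mirroring the direction
        # in which undo processing consumes log records.
        offset = max(size - BLOCK, 0)
        while offset >= 0:
            log.seek(offset)
            block = log.read(BLOCK)
            # ... decode the log records in `block` and undo the row changes ...
            offset -= BLOCK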

Diagram 2) depicts how rollback processing of large units of work may benefit from buffered I/O when reading transaction log data back, in order to undo data row changes:

[Diagram 2]


Internal tests showed significant improvements (up to 3x!) in rollback processing time for very large transactions.



This improvement is enabled by default starting in Db2 v11.1.3.3; no configuration or registry settings need to be enabled.



I hope this information is useful. If you have any thoughts or questions, feel free to submit a comment below.
