Transaction log files are one of the core strengths of a DBMS, if you ask me. The ‘D’ in ACID stands for ‘Durability’. Durability means that once a transaction is committed, it is not lost, even if the database crashes or the server it runs on has to be restarted.
Transaction Logging in DB2
DB2 writes transactions to the transaction log files while they are still in progress. Every logged action is written to the log buffer. The log buffer is then written out to disk whenever it becomes full or a transaction is committed (or in a couple of other special situations). With many transactions in progress, this means that the active transaction logs on disk contain both committed and uncommitted changes. Part of the information in the transaction logs is which transaction each change belongs to and whether or not that transaction has been committed.
Active Logs
Any uncommitted transactions can only be in the active log files. The active log files are defined in number by the database configuration parameters LOGPRIMARY and LOGSECOND, and in size by the database configuration parameter LOGFILSIZ. On database activation (or first connect), the number of log files defined by LOGPRIMARY is created. If DB2 runs out of room in these log files, it will allocate additional log files, up to the number specified by LOGSECOND. This is true for either circular or archive logging.
It is generally a good idea to keep LOGPRIMARY on the smaller side to speed up database activation time, while still leaving room for your normal day-in, day-out transaction logging. LOGPRIMARY on my databases is usually 14 or fewer, and for a midsize IBM WebSphere Commerce database, LOGFILSIZ works well somewhere between 10000 and 60000. Obviously larger, more active databases need more. I like LOGSECOND somewhere between 50 and 240 to handle those unexpected spikes. Of course, all of this has to take into account the size of your active log filesystem, too.
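If you want to see or change these parameters, a quick sketch with the db2 CLP looks something like this, assuming a database named SAMPLE and purely illustrative values (not recommendations for your workload):

$ db2 get db cfg for sample | grep -E "LOGFILSIZ|LOGPRIMARY|LOGSECOND"
$ db2 update db cfg for sample using LOGPRIMARY 14 LOGSECOND 100 LOGFILSIZ 20000

Note that LOGSECOND can be updated online, but changes to LOGPRIMARY and LOGFILSIZ do not take effect until the database is deactivated and reactivated.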
A single transaction cannot use more log space than LOGFILSIZ x (LOGPRIMARY + LOGSECOND). Large deletes fail when they fill up the available log files, so commit frequently to avoid this. You can also end up with a long-running idle connection causing log file saturation, which makes the database unavailable. See “Managing db2 transaction log files” for more information on log file saturation.
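To put that formula in concrete terms, here is a quick back-of-the-envelope calculation using the illustrative values above (remember that LOGFILSIZ is specified in 4 KB pages):

20000 pages x 4 KB = 80,000 KB, or about 78 MB per log file
78 MB x (14 + 100) log files = roughly 8.7 GB of log space available to a single transaction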
Archive Logs
All OLTP and many other databases should use archive logging.
When archive logging is enabled, you should always choose a location for LOGARCHMETH1, even if it is just a different directory. A different filesystem, or better yet a vendor archive system such as TSM, is vastly preferred. But even if you are only using a single filesystem, set it to a different directory. This gives you a location where you know you can compress, archive, or delete archived transaction log files without crashing the database. If you (or anyone) mess with an active log file, you are likely to crash the database, probably in a nasty way.
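Setting this is a one-line change. As a sketch, assuming a database named SAMPLE and a hypothetical archive directory:

$ db2 update db cfg for sample using LOGARCHMETH1 DISK:/db2/archive_logs/

Keep in mind that switching a database from circular to archive logging puts it into backup pending state, so plan for an offline backup when you first make this change.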
Side note: LOGARCHMETH1 was a great parameter when it was added. Before that, we had to compile and manage user exit programs, and that was a pain.
An active log is archived whenever it is full, even if it contains transactions that are still in flight. However, DB2 does not ‘remove’ it from the active path until it contains no active transactions. I say ‘remove’ because it looks to us like it is removed, but the file is actually renamed and reused, because that performs better than removing a file and creating a new one. Because of this, you can see the same log file in the active and archive paths at the same time.
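If you want to see which log files are currently in the active path, db2pd can show you. A minimal sketch, again assuming a database named SAMPLE:

$ db2pd -db sample -logs

The output includes the current log number and a list of the log files in the active log path, which gives you a sense of where the active/archive boundary sits.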
There are some scenarios, such as data replication or HADR, that can cause an already-archived log file to be read again. Scenarios like this make it generally a bad idea to compress or move/remove every log file as soon as it is archived to disk. They can also require you to make retrieval of logs from a tape library or other location possible from more than one database server.
Log files, whether full or not, may be archived with the ARCHIVE LOG command, a DEACTIVATE DATABASE command, or a BACKUP.
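The first of these is handy when you want to truncate and archive the current log file on demand. A sketch, assuming SAMPLE again:

$ db2 archive log for database sample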
LOGBUFSZ
The log buffer size frequently needs to be tuned. The larger it is, the more partial transactions you may lose in a failure, though in most failures partial transactions are rolled back anyway. In a database snapshot, two numbers are reported like this:
$ db2 get snapshot for db on sample | grep "Log pages"
Log pages read                             = 23108
Log pages written                          = 20513325
If you want to get that through an administrative snapshot view, you can use:
db2 "select log_reads, log_writes from sysibmadm.snapdb with ur" LOG_READS LOG_WRITES -------------------- -------------------- 23108 20513325 1 record(s) selected.
Interestingly enough, I’m not finding this in the mon_get table functions.
Generally, you want Log pages read to be at or near 0, and Log pages written to be a large number. If Log pages read is not 0, you may want to consider increasing your log buffer, but generally you don’t want LOGBUFSZ much over 1096 or so.
Basically during normal operations we want to mostly avoid log reads.
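If you do decide to adjust it, the change is a single parameter. A sketch assuming SAMPLE and an illustrative value:

$ db2 update db cfg for sample using LOGBUFSZ 1024

LOGBUFSZ is specified in 4 KB pages, and the new value does not take effect until the next database activation.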
So, readers, please share with me other tuning that you do around log file size or log buffer size. Is there anything else on transaction log files that you want to learn about?