Yet most developers still seem to be using DBMS_OUTPUT.PUT_LINE() for these tasks. DBMS_OUTPUT.PUT_LINE() has one virtue - ease of implementation - but it has several drawbacks:
- It requires an interactive session like SQL*Plus to view the output;
- It produces no permanent output (without a spool command);
- It may contaminate an application that needs to output information to the actual user;
- We only get the output once the PL/SQL program has completed;
- It doesn't work inside a FUNCTION;
- There's that pesky 1,000,000 character buffer limit.
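For reference, this is the pattern being criticised - a minimal sketch, nothing more:

  SET SERVEROUTPUT ON SIZE 1000000
  BEGIN
     FOR i IN 1 .. 3 LOOP
        -- nothing appears until the block completes and SQL*Plus drains the buffer
        DBMS_OUTPUT.PUT_LINE('processed batch ' || i || ' at ' || TO_CHAR(SYSDATE, 'HH24:MI:SS'));
     END LOOP;
  END;
  /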
So I think we can agree that DBMS_OUTPUT.PUT_LINE() is not suitable for anything other than the most cursory of debugging tasks. Yet many people have nothing else in their toolkit. What must be done?
What are the characteristics of a good logging solution?
Persistence
The log must be permanent. At the minimum this means logging to a file or a table, but it is nice to have the option of directing output to other destinations such as syslog.
Structure
The log must be more than a free text field. At the very least we need to be able to search on Who, What and When.
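As a minimal sketch of what that structure might look like (the table and column names here are illustrative, not taken from any particular tool):

  CREATE TABLE app_log (
     log_id     NUMBER         NOT NULL,  -- surrogate key, populated from a sequence
     logged_by  VARCHAR2(30)   NOT NULL,  -- Who
     logged_at  DATE           NOT NULL,  -- When
     log_level  VARCHAR2(10)   NOT NULL,  -- ERROR, INFO, DEBUG...
     module     VARCHAR2(61),             -- What: package.procedure
     message    VARCHAR2(4000)
  );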
Message levels
We need to be able to distinguish different types of message. The obvious ones are ERROR, INFO and DEBUG, but it is useful to be able to define custom levels.
Filtering
We need to be able to enable logging by message level. Typically we would only want to run at DEBUG level in a development environment. In production we would normally log only ERROR and certain types of INFO messages. However, we must be able to change this behaviour easily, so if we want to run production with DEBUG on, we can. Nice to haves include the ability to direct output for different message levels to different destinations and the ability to log at different levels for different subsystems.
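In code, the filtering boils down to something like this sketch (the constant values, g_threshold and write_log_record are purely illustrative):

  -- inside a hypothetical logging package body; higher value = more verbose
  c_error CONSTANT PLS_INTEGER := 10;
  c_info  CONSTANT PLS_INTEGER := 20;
  c_debug CONSTANT PLS_INTEGER := 30;
  g_threshold PLS_INTEGER := c_error;   -- raise to c_debug in development

  PROCEDURE write (p_level IN PLS_INTEGER, p_message IN VARCHAR2) IS
  BEGIN
     IF p_level <= g_threshold THEN
        write_log_record(p_level, p_message);   -- hypothetical private procedure
     END IF;
  END write;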
Robustness
It is crucial that the logging doesn't fail. If it does fail, it must at least report its failure. Furthermore it mustn't raise an exception and cause the program it's logging to fail.
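A common way to meet this requirement is an autonomous transaction with a catch-all handler - a sketch, reusing the hypothetical app_log table above (app_log_seq is an assumed sequence):

  CREATE OR REPLACE PROCEDURE write_log (p_level IN VARCHAR2, p_message IN VARCHAR2) IS
     PRAGMA AUTONOMOUS_TRANSACTION;   -- the log commit is independent of the caller's transaction
  BEGIN
     INSERT INTO app_log (log_id, logged_by, logged_at, log_level, message)
     VALUES (app_log_seq.NEXTVAL, USER, SYSDATE, p_level, p_message);
     COMMIT;
  EXCEPTION
     WHEN OTHERS THEN
        -- never let a logging failure break the caller; report it as best we can
        ROLLBACK;
        DBMS_OUTPUT.PUT_LINE('logging failed: ' || SQLERRM);
  END write_log;
  /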
Acceptable impact
We want all this and a small performance hit. Inevitably logging will consume CPU cycles, but provided this is kept to a reasonable level the price is worth paying, as the benefits of logging are plentiful.
Interface
The public face of the logging solution ought to conceal its workings from the programs that use it. Of course, this is just standard good practice, but it's particularly crucial with something like logging as it is used throughout our application. We want to be able to tweak the logging code without having to rewrite every single application.
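In practice that means the application only ever sees a thin facade, something like this sketch (the package name and procedures are illustrative):

  CREATE OR REPLACE PACKAGE app_logging AS
     -- the application calls these; whether they write to a table, a file,
     -- syslog or log4j is an implementation detail hidden in the package body
     PROCEDURE error (p_message IN VARCHAR2);
     PROCEDURE info  (p_message IN VARCHAR2);
     PROCEDURE debug (p_message IN VARCHAR2);
  END app_logging;
  /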
Enter stage left LOG4PLSQL
The project I am currently working on consists largely of a fully automated data warehouse: the user interaction is limited to the data mart on the front. The rest of it is long-running batch jobs. Logging is crucial: indeed, there is no other feedback on the success, failure or progress of a job. Although I had some scraps of code for logging errors, I lacked the full Monty logging implementation that I needed here. So I turned to LOG4PLSQL, an open source package. The LOG4PLSQL developer Guillaume Moulard was inspired by the open source Java tool log4j, which is as close to an industry standard as you can get. So much so that even though Sun has introduced its own logging package in JDK 1.4, I suspect most projects still use log4j.
Let's assess LOG4PLSQL under those headings...
Persistence: 8.5/10
LOG4PLSQL consists of a table with a PL/SQL package acting as an API. It also provides hooks to write out to the Alert Log or to throw a .trc file. Furthermore it has the ability to integrate with log4j, which adds further options (of which more later). The package uses DBMS_PIPE to connect to a log4j session running in the background. The key aspect of the PLOG API is the CTX PL/SQL record type, which stores metadata for the logging of the message.
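This is not LOG4PLSQL's actual code, but the general DBMS_PIPE pattern it relies on looks roughly like this (the pipe name and message format are assumptions):

  DECLARE
     l_status INTEGER;
  BEGIN
     DBMS_PIPE.PACK_MESSAGE('ERROR: something went wrong');
     -- a listener session (e.g. a log4j bridge) reads the other end of the pipe
     l_status := DBMS_PIPE.SEND_MESSAGE('LOG_PIPE');
     IF l_status != 0 THEN
        DBMS_OUTPUT.PUT_LINE('send_message returned ' || l_status);
     END IF;
  END;
  /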
Structure: 7.5/10
The TLOG table has columns for user, datetime and context as well as a message text. The datetime is an Oracle DATE column, but it also uses the V$TIMER view to get hundredths of a second. It would be neater if it used the TIMESTAMP data type, but the package is meant to be runnable on Oracle 8 installations. I would also like to see TLOG store the SID.
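Both improvements are cheap to make on later Oracle versions - for example, shown against the hypothetical app_log table from earlier rather than TLOG itself:

  -- TIMESTAMP (9i and later) gives fractional seconds without resorting to V$TIMER;
  -- SYS_CONTEXT supplies the session id on 10g (earlier releases can query V$MYSTAT)
  ALTER TABLE app_log ADD (logged_ts TIMESTAMP, sid NUMBER);

  INSERT INTO app_log (log_id, logged_by, logged_at, log_level, message, logged_ts, sid)
  VALUES (app_log_seq.NEXTVAL, USER, SYSDATE, 'INFO', 'demo message',
          SYSTIMESTAMP, TO_NUMBER(SYS_CONTEXT('USERENV', 'SID')));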
Message levels: 7/10
The package defines five default message levels - FATAL, ERROR, WARN, INFO and DEBUG - plus ALL and NONE as the outer bounds of logging. These are actually constants, mapping to numeric values. Despite being "inspired by" log4j, LOG4PLSQL has a different scale from that tool. It runs from 0=NONE to 70=ALL whereas log4j runs from 10000=ALL to 50000=NONE; this divergence has a key impact when it comes to the integration with log4j later on.
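If you did need to bridge the two scales, it would take an explicit mapping along these lines (the log4j values are its standard level integers; the function name is my own):

  CREATE OR REPLACE FUNCTION to_log4j_level (p_name IN VARCHAR2) RETURN PLS_INTEGER IS
  BEGIN
     -- maps a level name to the corresponding log4j integer
     RETURN CASE UPPER(p_name)
               WHEN 'DEBUG' THEN 10000
               WHEN 'INFO'  THEN 20000
               WHEN 'WARN'  THEN 30000
               WHEN 'ERROR' THEN 40000
               WHEN 'FATAL' THEN 50000
            END;
  END to_log4j_level;
  /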
Filtering: 7/10
LOG4PLSQL has two mechanisms for assessing message levels and deciding what to do with them. Firstly, a PLOG.CTX can be associated with each message: this allows us to individually specify the treatment of the message - write to the alert log, store in the table, send to DBMS_OUTPUT, etc. If no CTX record is specified the package uses a default record (and we can set the values for that default). Additionally LOG4PLSQL allows us to set a default logging level: messages of a message type lower than that level are discarded. The problem is that the logging level is hard-coded in a package specification. This is very efficient from a performance point of view but it makes it awkward to alter the level. Indeed, in a production environment most DBAs would be profoundly reluctant to edit a package spec at all, and certainly not without a change request. We could also filter log messages by using log4j. However, because the LOG4PLSQL message levels do not map to the log4j levels, we cannot use log4j filtering methods to direct different levels of messages to different appenders.
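One obvious workaround for the hard-coded level is to hold the threshold somewhere that can be changed without touching code - a sketch using a one-row parameter table (all names hypothetical):

  CREATE TABLE log_config (log_level VARCHAR2(10) NOT NULL);
  INSERT INTO log_config VALUES ('ERROR');
  COMMIT;

  -- this would normally live inside the logging package body
  CREATE OR REPLACE FUNCTION current_level RETURN VARCHAR2 IS
     l_level log_config.log_level%TYPE;
  BEGIN
     SELECT log_level INTO l_level FROM log_config;
     RETURN l_level;
  END current_level;
  /

The extra query per message is part of the "acceptable impact" trade-off; it can be softened by caching the value in a package variable and only re-reading it periodically.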
Robustness: 9/10
No complaints on this score with regard to normal running. The only problem has been with the installation scripts. The packages are annotated with PLDOC comments so that HTML documentation can be generated using the Javadoc tool. Unfortunately the annotations are not correct, which leads to error messages. So Moulard has provided two empty scripts, param.sql and return.sql, which suppress the error messages but throw errors of their own because some versions of SQL*Plus don't like empty files. The correct solution is to ensure that every line of a PLDOC comment starts with an asterisk.
When using log4j we have to be certain that the background instance is running. Otherwise the DBMS_PIPE gets clogged up and no further logging happens until the pipe is flushed.
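If the pipe does clog up, it can be cleared manually; DBMS_PIPE.PURGE discards everything queued on the named pipe (the pipe name here is an assumption):

  BEGIN
     DBMS_PIPE.PURGE('LOG_PIPE');   -- throws away any unread log messages on the pipe
  END;
  /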
Acceptable impact: 6.5/10
Writing records to a database is a fairly expensive process; logging TRACE and DEBUG messages to the TLOG table can have a severe impact on the performance of PL/SQL code. Also, the checking of the logging level is fairly cumbersome and is an overhead in its own right.
Interface: 7/10
The PLOG package exposes the workings of LOG4PLSQL to a degree unsuitable for production use. However, I think this is fair enough: the facade ought to be application specific.
Conclusion
So, to sum up, LOG4PLSQL offers a fine starting point as a logging package if you don't have one of your own. It has a solid set of functionality and does pretty much everything you would want. There are a couple of niggling things, but as it is open source, we are free to tweak the program to suit ourselves. Having said all that, I am now working on my own logging package (a man's got to have a hobby).
LOG4PLSQL ultimately is engineered (and in some respects over-engineered) to suit Guillaume Moulard's needs. My needs are somewhat different: in particular I need better performance out of logging to a table, and I need to be able to change the logging level dynamically. I want certain types of log message written to V$SESSION_LONGOPS. I would like to be able to have different logging behaviours for discrete subsystems. I need a better fit with the industry standard tools: log4j and syslog. Finally, having read Michael Feathers' thoughts on stunted frameworks, what I want is a simple core logging package that is easy to extend for specific applications. My current application consists of a core of (amended) LOG4PLSQL with three packages over it; it's a horrible, gnarly mess of application-specific stuff (how subsystems are handled) and general stuff (writing out to syslog). Fortunately the public interface is quite clean, so (theoretically) it will be a cinch to swap in my new logging package - except I'll probably have left the project by the time I get it working properly....
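For what it's worth, the V$SESSION_LONGOPS idea is straightforward with DBMS_APPLICATION_INFO; this is the standard pattern rather than anything from my package:

  DECLARE
     l_rindex BINARY_INTEGER := DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS_NOHINT;
     l_slno   BINARY_INTEGER;
  BEGIN
     FOR i IN 1 .. 100 LOOP
        -- do a unit of work here, then report progress to V$SESSION_LONGOPS
        DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS(
           rindex    => l_rindex,
           slno      => l_slno,
           op_name   => 'Nightly load',
           sofar     => i,
           totalwork => 100,
           units     => 'batches');
     END LOOP;
  END;
  /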
3 comments:
Test teams can appreciate logging, especially if you dump variable values and other status information.
Finding a way to turn it on and off and set levels is a pain if the call is hard coded; fortunately I've only ever instrumented batch-type applications where I could easily change a parameter. One possibility is a user/application/logging table (easy to change and persistent), but then you get into managing all that configuration, ughhh.
I haven't looked at the stuff you referenced, but one useful feature I never found time to implement was to switch levels in mid-stream, e.g. for a particular event or procedure. Instrumentation is an extremely valuable tool for development as well as support, but if you know where the problem is likely to be, or where you are up to in the development, the remainder is just unnecessary crud.
What is the current status of your logging package? It sounds promising and I would be very interested in seeing the final result.
Well, how about OraLog project? http://oralog.sourceforge.net