The Server Labs Blog


Dynamically changing log level with Weblogic, log4j, JMX and WLST

Logging is an uninteresting but nevertheless fundamental part of application development. It is useful not just when you are coding an application but in diagnosing problems of any nature once an application passes into production.

Unfortunately, it is not that clear how to do logging well when developing JEE applications that are to be deployed on a Weblogic server. With this article, I hope to clear up some of the confusion or, at the very least, explain a method that works for me!

The article is based on the use of Apache log4j since that is the most commonly used logging library. The article explains:

  • how to configure log4j properly in a JEE app to be run on Weblogic
  • how to dynamically configure the log level using JMX and Weblogic Scripting Tool

The article assumes some familiarity with Apache Maven, Eclipse and Weblogic.

The code is available here, although you may prefer to just download the EAR file, which saves you having to compile any code with Maven. The file contains the code for the JEE application, the code for the JMX library developed by The Server Labs and used in this article, and the WLST scripts.

Configuring log4j in Weblogic

Weblogic 10 comes with a modified version of log4j by default, so you can use log4j in your application without having to deploy the log4j library yourself. Unfortunately, my experience (in WLS 10.0) is that if you do this, you will not be able to configure log4j for your application, because log4j is loaded in a classloader separate from that of your application. Any log4j configuration file that you include will be ignored.

Instead, what you have to do is the following:

1) Include the log4j JAR in your application. If your app is an EAR (like mine), then the log4j JAR should go in the .ear file (in the root directory or in /APP-INF/lib). In my project, I have used a Maven multi-project setup with 3 projects – 1 for the EAR, 1 for the EJBs and 1 webapp. In this case, I have to add the log4j JAR to the EAR project’s pom.xml:


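As a sketch, the dependency entry in the EAR project's pom.xml would look something like this (the log4j version shown is an assumption – use whichever 1.2.x version your project standardizes on):

```xml
<!-- log4j packaged into the EAR so the app uses its own copy -->
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.15</version> <!-- assumed version -->
</dependency>
```
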
2) Tell Weblogic to use your version of the log4j library and not its own. To do this, you specify the following in the weblogic-application.xml file which is in the EAR (/META-INF/weblogic-application.xml):

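The fragment would look roughly like this (a sketch against the WLS 10 descriptor schema):

```xml
<weblogic-application xmlns="http://www.bea.com/ns/weblogic/weblogic-application">
    <!-- load org.apache.log4j.* from the application, not the server -->
    <prefer-application-packages>
        <package-name>org.apache.log4j.*</package-name>
    </prefer-application-packages>
</weblogic-application>
```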

This basically says that when the app asks for a class in the package org.apache.log4j or its sub-packages, it should be looked for in the app’s classloader and not in the Weblogic server classloader.

3) Configure log4j using a log4j.properties file

Log4j requires that you configure it in order for it to output log messages. I include the following contents in the /APP-INF/classes/log4j.properties file which is in the EAR file:

log4j.rootLogger=INFO, stdout, file

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %-5p %c %x - %m%n

log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/bea/logs/aplicaciones/blog-logging/logging_${weblogic.Name}.log
log4j.appender.file.Append=false
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.MaxFileSize=20000Kb
log4j.appender.file.MaxBackupIndex=5
log4j.appender.file.layout.ConversionPattern=[%d] %-5p %c %x - %m%n

This configures log4j to send output both to the console in which Weblogic was started and to a file (/bea/logs/aplicaciones/blog-logging/logging_${weblogic.Name}.log). You will probably want to change the file path to something that works on your machine.

Testing that log4j is working OK

In order to test these changes, download the EAR file and deploy it to a Weblogic server. I tested on Weblogic 10.3, but I believe it should work on any Weblogic 10.x server or later.

Go to the URL http://localhost:7001/logging/test and you should see a very basic HTML page that tells you to check the logs:


Take a look at the console in which Weblogic is running (or the server’s .out file if you have the output redirected) and you should see output like that shown below:

[2010-04-21 15:29:47,888] WARN  - warn
[2010-04-21 15:29:47,888] INFO  - info
[2010-04-21 15:29:47,888] DEBUG  - debug
[2010-04-21 15:29:47,889] ERROR  - error
[2010-04-21 15:29:47,889] FATAL  - fatal
[2010-04-21 15:29:48,056] WARN  - warn
[2010-04-21 15:29:48,056] INFO  - info
[2010-04-21 15:29:48,056] DEBUG  - debug
[2010-04-21 15:29:48,056] ERROR  - error
[2010-04-21 15:29:48,056] FATAL  - fatal

If you examine the application, you can see that accessing the URL causes a servlet to be executed. The servlet writes its messages to the log and then invokes an EJB3 session bean which also writes messages to the log.

This shows that this logging solution works equally well for web components as for EJBs, and that there is no interference with the Weblogic logging system.

Making it dynamic

This is a great solution while you’re still developing the app, because you can change the debug level just by modifying the log4j.properties file and redeploying the app. However, this is not something you can feasibly do in a production environment – not at the companies I’ve worked at, anyway – due to the interruption in service it causes to the end users.

Why would you want to change the log level? Well, if you have to diagnose a problem in the application, it often makes sense to increase the log level of either the whole application or a subset of it so that you receive more information to help you diagnose the problem. Once you have diagnosed the problem, you return the logging to its original level so as not to affect the performance of the application unnecessarily.

What we really want is to be able to change the log level dynamically (i.e. without redeploying the application), ideally using administrator tools (since that is what the people who manage production systems tend to use) – not developer tools.

First I’ll show you how to change the log level and then I’ll explain how it works.

Using WLST and JMX to change the log level

If you have deployed the example application, go to your domain home in a terminal window and execute the setDomainEnv script. The idea is to update your shell with the domain environment variables such as JAVA_HOME, CLASSPATH etc. On my linux box, I execute the following:

$ cd $DOMAIN_HOME/bin
$ source setDomainEnv.sh

To check that it has worked, run “echo $CLASSPATH” on Unix or “echo %CLASSPATH%” on Windows and you should see a long classpath.

Once you have set up the environment, execute the script that is included in the /scripts directory of the code ZIP:

java -classpath $CLASSPATH  $JAVA_OPTIONS  weblogic.WLST

You should see the following output:

$ java -classpath $CLASSPATH  $JAVA_OPTIONS  weblogic.WLST

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

==>Insufficient arguments
==>Syntax: [APP NAME] [LOGGER] [LEVEL]
==> where: APP NAME=context root of the app e.g. xxx for /xxx
==>        LOGGER=log4j logger name
==>   e.g: xxx DEBUG

As the instructions indicate, you have to call the script with the name of the app (which is the context root of the application web), the log4j logger that you wish to modify (normally a Java package name) and the level that you want to set (TRACE/DEBUG/INFO/WARN/ERROR/FATAL).

In the example below, we set the log level to ERROR for a package in the application “logging”, which is what the example application is called. The output is the following:

java -classpath $CLASSPATH  $JAVA_OPTIONS  weblogic.WLST logging ERROR 

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Connecting to t3://localhost:7001 with userid weblogic ...
Successfully connected to Admin Server 'AdminServer' that belongs to domain 'log4j'.

Warning: An insecure protocol was used to connect to the 
server. To ensure on-the-wire security, the SSL port or 
Admin port should be used instead.

Location changed to custom tree. This is a writable tree with No root.
For more help, use help(custom)

changing log level to ERROR for app logging on server t3://localhost:7001
  ... done. New log levels are:

Exiting WebLogic Scripting Tool.

As you can see from the line “New log levels are:”, the log level was changed from the default configured in the log4j.properties file (DEBUG) to ERROR.

If you go to the logging test page again (http://localhost:7001/logging/test), you should see that there are no longer entries for the DEBUG, INFO or WARN levels:

[2010-04-21 17:47:09,728] ERROR  - error
[2010-04-21 17:47:09,728] FATAL  - fatal
[2010-04-21 17:47:09,728] ERROR  - error
[2010-04-21 17:47:09,728] FATAL  - fatal

So, we can now dynamically change the log level of a logger in our application using an administrator tool (WLST). Pretty cool, huh?

Here I have specified a single high-level logger in my application. You can only pass one logger to the script per invocation, but there is nothing to stop you from running the same script several times with different loggers – e.g. once to set a logger to DEBUG to get detailed logging of EJB session beans, and again to set org.hibernate to INFO to get more info on what Hibernate is doing.

How do I get this to work in my application?

Simple. Include our jmx-logging JAR in your web application and add the following lines to your web.xml file:


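The listener class (com.theserverlabs.jmx.web.JmxServletContextListener) and the JNDI name it looks up (java:comp/env/jmx/runtime) both appear later in this article; based on those, the web.xml entries would be roughly as follows (a sketch, not the exact fragment from the original post):

```xml
<!-- registers/unregisters the log-manager MBean on app start/stop -->
<listener>
    <listener-class>com.theserverlabs.jmx.web.JmxServletContextListener</listener-class>
</listener>

<!-- resource reference for the Weblogic runtime MBean server -->
<resource-ref>
    <res-ref-name>jmx/runtime</res-ref-name>
    <res-type>javax.management.MBeanServer</res-type>
    <res-auth>Container</res-auth>
</resource-ref>
```
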
At startup time, a JMX MBean will be registered with the Weblogic JMX subsystem. The MBean is registered under a name which includes your application name, meaning there will be one MBean per application deployed on the server.

Once deployed, you can use the WLST script discussed in the previous section to interact with the MBean for your application and thereby change the log level of your application.

OK, how does all this really work?

If you look at the source code for the jmx-logging JAR (included in the source code bundle), you will find a class called com.theserverlabs.jmx.ApplicationLogManager. This class implements the interface ApplicationLogManagerMBean, whose methods are JMX MBean-compatible. It provides methods that allow a caller to set the log level for a given log4j logger:

	public void setLogLevel(String logger, String level) {
		Logger l = Logger.getLogger(logger);
		l.setLevel(Level.toLevel(level));
	}
and to interrogate log4j for the currently-configured loggers and their levels, returning a map of Strings:

	public Map<String, String> getLogLevels() {
		Map<String, String> result = new HashMap<String, String>();
		Enumeration<?> e = Logger.getRootLogger().getLoggerRepository().getCurrentLoggers();
		while (e.hasMoreElements()) {
			Logger l = (Logger) e.nextElement();
			if (l.getLevel() != null) {
				result.put(l.getName(), l.getLevel().toString());
			}
		}
		return result;
	}

This MBean can be registered with any JMX system, allowing its methods to be invoked via JMX calls. In this case, we want to register it with the Weblogic JMX system when the application starts up. To accomplish this, we use the class com.theserverlabs.jmx.web.JmxServletContextListener, which is registered in the web.xml of the web application.


A modified version of the source code for JmxServletContextListener is shown below, with error-handling and logging removed for clarity. When Weblogic starts our application and calls the contextInitialized() method, we register a new instance of ApplicationLogManager under the object name “com.theserverlabs.jmx:type=ApplicationLogManager,name=[NAME OF APPLICATION]LogManager”. This name is generated in the getObjectName() method. When Weblogic stops the web application, it calls the contextDestroyed() method, which un-registers the instance of ApplicationLogManager from the JMX MBean server.

public class JmxServletContextListener implements ServletContextListener {

	private InitialContext ctx;
	private MBeanServer server;

	public JmxServletContextListener() throws NamingException {
		ctx = new InitialContext();
		server = MBeanServer.class.cast(ctx.lookup("java:comp/env/jmx/runtime"));
	}

	public void contextInitialized(ServletContextEvent sce) {
		server.registerMBean(new ApplicationLogManager(), getObjectName(sce));
	}

	public void contextDestroyed(ServletContextEvent sce) {
		if (server.isRegistered(getObjectName(sce))) {
			server.unregisterMBean(getObjectName(sce));
		}
	}

	private ObjectName getObjectName(ServletContextEvent sce) throws MalformedObjectNameException, NullPointerException {
		String appName = sce.getServletContext().getContextPath().substring(1);
		return new ObjectName("com.theserverlabs.jmx:type=ApplicationLogManager,name=" + appName + "LogManager");
	}
}

The WLST script

That’s pretty much all there is to it in terms of application code. The only thing left to discuss is the Weblogic Scripting Tool (WLST) script that modifies the log level. Below is the source code (it is Python code):

def changeServerAppLogLevel(app, logger, level, serverUrl):

    ## connect to the server
    connect("weblogic", "weblogic", url=serverUrl)

    ## go to the custom MBean tree
    custom()
    ## go to the place where our app log level mbeans are stored. 
    cd('com.theserverlabs.jmx/com.theserverlabs.jmx:type=ApplicationLogManager,name=' + app + 'LogManager')

    ## horrible code necessary to invoke a Java method that changes the log level
    args = jarray.array([java.lang.String(logger), java.lang.String(level)],java.lang.Object)
    sig = jarray.array(['java.lang.String', 'java.lang.String'],java.lang.String)
    print 'changing log level to ' + level + ' for app ' + app + ' on server ' + serverUrl
    invoke('setLogLevel', args, sig)

    print '  ... done. New log levels are:'
    print get('LogLevels')

    disconnect()

# "main method" of the python script
argslength = len(sys.argv)

if argslength < 4 :

    print '==>Insufficient arguments'
    print '==>Syntax: [APP NAME] [LOGGER] [LEVEL]'
    print '==> where: APP NAME=context root of the app e.g. xxx for /xxx'
    print '==>        LOGGER=log4j logger name'
    print '==>   e.g: xxx DEBUG'


else :
    app = sys.argv[1]
    logger = sys.argv[2]
    level = sys.argv[3]

    # change the log level for the server. if there was more than one server
    # in the cluster, you could add multiple calls here, each with a different 
    # server URL
    changeServerAppLogLevel(app, logger, level, 't3://localhost:7001')

Hopefully the majority of this code will be clear, even if you don’t know Python. The function changeServerAppLogLevel() declared at the start modifies the log level for a given logger in an application running on a specific Weblogic server. To do this it:

  • Connects to the Weblogic server at the specified URL (note that this example uses the hardcoded username and password weblogic/weblogic)
  • Moves to the custom JMX MBean tree which is where Weblogic puts all non-Weblogic MBeans
  • Moves to the place where our MBean is stored in the tree
  • Executes the setLogLevel() method of the MBean with the required arguments to change the log level. This translates to an invocation of the ApplicationLogManager.setLogLevel() method.
  • Lists the new log levels

In the “main” method of the script (i.e. everything that is not in a method/function), there is some basic code to make sure the script is properly invoked and, if it is, a call to changeServerAppLogLevel() with the supplied arguments and the hard-coded URL of ‘t3://localhost:7001’. Obviously you should modify this value to reflect your Weblogic environment.

One interesting aspect of using a script to do this is that you can include a call to changeServerAppLogLevel() for each server in your cluster, if you have a multi-node setup. MBeans are published at server level so there is no concept of publishing a cluster-wide MBean (so far as I know anyway).

An interesting extension of this script would be to interrogate a Weblogic Admin server to find all the nodes in the cluster that were up and then execute the code to change the log level against each available node.


Hopefully this article has cleared up any confusion as to how to perform logging with log4j in a JEE app deployed on Weblogic. I think that the ability to dynamically modify the log level for a given log4j logger without having to redeploy the application is extremely useful, and hopefully it will be of use to people looking to do a similar thing either with Weblogic or any other JEE server that supports JMX.

Configuring Oracle ASM On Amazon EC2

As part of the work we did for running Astrometric Processing on Amazon EC2, we needed to configure an instance of Oracle Enterprise 11g using ASM (Automatic Storage Management).

The steps in this post are based on the Oracle supplied AMI ami-7ecb2f17 which includes Oracle 11g 11.06 Enterprise Edition 64bit.

You can use the EC2 tools to launch instances, but for the purposes of this post I will use ElasticFox.

As a convention in this post, I will use the following prompt symbols: $$ for your local PC, and EC2$ for the remote EC2 instance.

Step 1: Launch the instance

Launch ami-7ecb2f17 as a large instance. Once it is running you will have to wait a couple of minutes for SSH to be available. When you connect, using Putty or whichever tool you prefer, accept the license and choose a password as shown below.


At the next prompt “Would you like to create a database now”, select no.

Before you proceed, note that this image has a small problem: the /mnt partition isn’t mounted and, as things stand, we won’t have enough space to create a new image based on this one.

This is easily fixed by adding the following line to the end of /etc/fstab

/dev/sdb   /mnt      ext3    defaults        0 0

and executing the following in a shell

EC2$ mount /mnt

Step 2: Assign an elastic IP

In order for Oracle to work well, it’s better to have a static IP, so we will assign an Elastic IP.
Using ElasticFox or the API tools, create an Elastic IP and assign it to the instance.

Change the instance hostname to be the public DNS name

EC2$ hostname

Step 3: Prepare ASMLib

Download Oracle ASMLib from the Oracle website here, making sure you choose the right drivers for this kernel: 2.6.18-
Ah. First problem: there aren’t any for this particular kernel, so we’ll download the nearest ones available, which are the 3 drivers for 2.6.18-53.1.13.el5

Once downloaded, copy the RPMs up to your instance and install them

EC2$ rpm -Uvh    oracleasm-support-2.1.3-1.el5.x86_64.rpm 
EC2$ rpm -Uvh  oracleasm-2.6.18-53.1.13.el5xen-2.0.4-1.el5.x86_64.rpm 

Because of the incompatibility of the drivers, the second command will give us an error:

error: Failed dependencies:
kernel-xen = 2.6.18-53.1.13.el5 is needed by oracleasm-2.6.18-53.1.13.el5xen-2.0.4-1.el5.x86_64

We have to force the install and then use the script /usr/lib/oracleasm/oracleasm_debug_link to create a link between the two dependencies. If you have Oracle MetaLink access, see notes 805535.1 and 4626181.1.

A small caveat: if you do the next steps, Oracle won’t support you. I imagine that this will change soon as they are committed to EC2.

EC2$ rpm -Uvh --nodeps oracleasm-2.6.18-53.1.13.el5xen-2.0.4-1.el5.x86_64.rpm  
Preparing...                ########################################### [100%]
   1:oracleasm-2.6.18-53.1.1########################################### [100%]
   2:oracleasmlib           ########################################### [100%]

Create the link between the oracleasm packages

EC2$ /usr/lib/oracleasm/oracleasm_debug_link 2.6.18-53.1.13.el5xen 2.6.18-

Now we need to configure ASMlib using /etc/init.d/oracleasm configure
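
The configure step is interactive. The prompts look roughly like this (the oracle/dba answers are assumptions – use the user and group that own your Oracle installation):

```
EC2$ /etc/init.d/oracleasm configure
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
```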

Believe it or not, that’s the hard part done. The rest is much, much easier.

Before we go any further with configuring Oracle we need some persistent storage for ASM. I’m going to attach 5 EBS (Elastic Block Storage) volumes of 100GB each to the instance using devices /dev/sdg through to /dev/sdk.

For each disk we need to create a primary partition for ASM

And then we have to create the disk for ASM

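As a sketch, the commands for all five disks can be generated in a loop. The device names /dev/sdg../dev/sdk come from the volumes attached above; the VOL1..VOL5 disk labels are assumptions. This version only prints the commands (a dry run) so you can review them before running them as root:

```shell
#!/bin/sh
# Dry-run sketch: print, for each EBS volume, the command that creates a
# single primary partition and the command that stamps it as an ASM disk.
# Device names /dev/sdg../dev/sdk are from the attached volumes; the
# VOL1..VOL5 labels are assumptions.
cmds=""
n=1
for dev in /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk; do
  # feed fdisk the answers: new, primary, partition 1, defaults, write
  part="printf 'n\np\n1\n\n\nw\n' | fdisk $dev"
  asm="/etc/init.d/oracleasm createdisk VOL$n ${dev}1"
  printf '%s\n%s\n' "$part" "$asm"
  cmds="$cmds$part
$asm
"
  n=$((n+1))
done
```

Once you are happy with the printed commands, paste them into a root shell (or pipe the output through sh).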

Step 4: Configure the ASM and the Database

Because these are EC2 server images, and therefore don’t have X installed, we have to execute dbca using pre-recorded response files.

EC2$ dbca -silent -responseFile dbca_ASM.rsp

EC2$ dbca -silent -responseFile dbca_AGISDB.rsp

You can download the response file I used for ASM here and the response file for creating the database here

Step 5: Start your databases

EC2$ echo "startup" | sqlplus -s / as sysdba
EC2$ echo "startup" | sqlplus -s / as sysdba
EC2$ echo "AGISDB Started"

Step 6: Fix /etc/inittab

The /etc/inittab generated by Oracle doesn’t include run level 4, the run level that Amazon EC2 instances use. To fix this, edit /etc/inittab and change the following line

h1:35:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1

to

h1:345:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1

Copy /etc/inittab to /etc/inittab.fixed

Add the following to /etc/init.d/dbora

/u01/app/oracle/product/11.1.0/db_1/bin/localconfig delete
/u01/app/oracle/product/11.1.0/db_1/bin/localconfig add
cp /etc/inittab.fixed /etc/inittab
emctl start dbconsole

Optionally start your databases

Final step: Create your new image

From your PC, copy your certificates up to the running instance

$$ scp -i   /*.pem root@ec2-xx-xx-xx-xxx.compute-1.amazonaw

Inside the image, create a new image bundle, making sure you exclude the /mnt directory where your certificates are. If you create a public image with your certificates embedded in it, you compromise the security of your images.

EC2$ ec2-bundle-vol -d /mnt  -e /mnt,/apps -k /mnt/pk-.pem -c /mnt/cert-.pem -r x86_64 -u  -p mynewinstance

Upload the bundle to S3 to a suitable place, e.g. myimages

EC2$ ec2-upload-bundle -b myimages -m /mnt/mynewinstance.manifest.xml -a  -s 

Finally, register the image in EC2
$$ ec2-register myimages/mynewinstance.manifest.xml

I think I've included everything. If you have problems, please let me know.

Continuous Integration with Oracle PL/SQL, utPLSQL and Hudson

PL/SQL is not exactly the coolest technology around these days but many businesses rely on it for their day-to-day operation. The fact that it is relatively old does not mean that you cannot apply Extreme Programming (XP) practices to PL/SQL development. A few years ago, Steven Feuerstein developed utPLSQL, a unit testing framework for PL/SQL, which covers the testing part of XP. In this post, I will show you how to apply Continuous Integration (another XP practice) to PL/SQL development using Hudson.

This article will explain how to construct a Maven project that enables you to do the following by executing just a single Maven command:

  • Deploy the project database schema to an Oracle database
  • Install the project PL/SQL packages to an Oracle database
  • Insert data specified in SQL files into the tables in the database
  • Install the utPLSQL test packages to an Oracle database
  • Run the utPLSQL unit tests

UPDATE: Since this article was written, we have open-sourced the Maven utPLSQL plugin.

Additionally, we will configure a Hudson project that performs all of the above when you submit a change to the source code repository (Subversion in this case) and generates a visual report showing the test results. If any test failed, Hudson can be configured to send an email to the project developers so that they can immediately investigate and fix the problem, in line with common continuous integration practices.

The Oracle database

This post assumes that you have an Oracle database available. I used an Oracle 10g XE instance running on localhost to develop this post. XE is free to download and is sufficient for development purposes.

I created a user called “testing” with password “testing” in the database and also installed the utPLSQL schema, which is a necessary step to following this example.

NOTE: If you install the utPLSQL schema into an Oracle 10g XE database, make sure that you have granted access on UTL_FILE to public as explained here (remember to connect as sysdba when doing this otherwise it won’t work).
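
Connected as sysdba, the grant itself is a one-liner:

```sql
-- must be run as sysdba, otherwise the grant won't take effect for utPLSQL
GRANT EXECUTE ON UTL_FILE TO PUBLIC;
```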

A PL/SQL Project in Maven

Maven is a build tool which has a large number of plugins available and is very well integrated with Hudson, so it is an ideal choice for constructing our PL/SQL project. If you have an existing project, it is not hard to move it to a Maven-compatible structure.

In this post, we will work with a PL/SQL test project. You can download the source code for this project here. Once you’ve downloaded the code, copy it to a folder on your machine and unzip it. The example follows the Maven standard directory layout, which I have customized to suit PL/SQL development:

  • /src/main/plsql/ – PL/SQL source code (packages, functions)
  • /src/test/plsql/ – PL/SQL unit tests (written in utPLSQL)
  • /src/main/sql/ – SQL scripts for creating the schema
  • /src/main/resources/data/ – SQL scripts for inserting schema data

The source code included in the project is a modified version of two examples that come with the utPLSQL package and is not really too important here – other than being some code for which there exist some utPLSQL tests.

Adding artifacts to the local Maven repository

NOTE: If you have not already installed Maven, follow these instructions.

Maven generally downloads the libraries that it needs from the internet. However, there are two libraries used in this post which are not available in any public Maven repository, so we will have to install them by hand. One is the Oracle JDBC driver, which is not publicly available due to Oracle’s licensing restrictions, and the second is the plugin that The Server Labs have developed for running utPLSQL tests, which we have not (yet at least) made publicly available. Below are the instructions that you must follow to install these two artifacts before continuing:

The Maven utPLSQL plugin developed by The Server Labs

  • Download the plugin here and the required pom.xml file here
  • Execute the following in a command console to install the plugin. Note that in this case I downloaded the plugin to /tmp/maven-utplsql-plugin-10-snapshot1.jar and the pom.xml file to /tmp/pom1.xml. Change this to suit your environment.
    mvn install:install-file -Dfile=/tmp/maven-utplsql-plugin-10-snapshot1.jar -DpomFile=/tmp/pom1.xml

Oracle JDBC driver

  • In the machine on which you have installed Oracle, go to the product/10.2.0/server/jdbc/lib directory of the Oracle Home. On my ubuntu machine with Oracle 10g XE installed, this is /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/jdbc/lib.
  • Copy the ojdbc14.jar file to somewhere convenient on your machine e.g. /tmp
  • Execute the following in a command console to install the JDBC driver in the local Maven repository. Note that you should change /tmp/ojdbc14.jar to the path to which you stored the JAR in the step above.
    mvn install:install-file -Dfile=/tmp/ojdbc14.jar -DartifactId=ojdbc14 -Dversion= -Dpackaging=jar -DgeneratePom=true

Configuring the PL/SQL project

In Maven, build information is contained in the pom.xml file which is in the root of the example project. If you open this file and take a look at it, you will see that there are two maven plugins configured which do most of the work:

Maven SQL Plugin

This plugin installs the database schema, PL/SQL packages, data and PL/SQL test packages in the specified Oracle database when you run the maven test command. It relies on being able to find the PL/SQL packages, SQL files etc. in the places specified in the directory structure above. The database connection details are specified in the main <configuration> element – change these to suit your environment. Although the configuration initially looks quite complicated, it really is just the same thing repeated various times and is easy to understand once you get used to it.
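
As a sketch, the plugin declaration with one of its four executions (create-schema) might look like this. The connection details match the testing/testing user used in this post; the ojdbc coordinates are assumptions that must match whatever you used in the install:install-file step:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>sql-maven-plugin</artifactId>
  <configuration>
    <driver>oracle.jdbc.driver.OracleDriver</driver>
    <url>jdbc:oracle:thin:@localhost:1521:XE</url>
    <username>testing</username>
    <password>testing</password>
  </configuration>
  <executions>
    <execution>
      <id>create-schema</id>
      <phase>process-resources</phase>
      <goals><goal>execute</goal></goals>
      <configuration>
        <fileset>
          <basedir>src/main/sql</basedir>
          <includes><include>**/*.sql</include></includes>
        </fileset>
      </configuration>
    </execution>
    <!-- create-plsql-packages, insert-data and create-plsql-test-packages
         follow the same pattern with different filesets -->
  </executions>
  <dependencies>
    <dependency>
      <!-- assumed coordinates: must match your install:install-file step -->
      <groupId>com.oracle</groupId>
      <artifactId>ojdbc14</artifactId>
      <version>10.2.0</version>
    </dependency>
  </dependencies>
</plugin>
```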

The Maven utPLSQL plugin

This is a plugin written by The Server Labs for executing utPLSQL tests from Maven. It is configured in a similar way to the Maven SQL plugin shown above. You must specify the dependency that the plugin has on the Oracle JDBC driver and the details to connect to the Oracle testing user.

The plugin allows you to specify either <packageName>xxx</packageName> or <testSuiteName>yyy</testSuiteName>. You must specify one or the other but not both. Which you specify depends on whether you want to run a utPLSQL test package or test suite. The example below specifies that we should run the ‘All’ utPLSQL test suite.

When this plugin runs, it runs the utPLSQL test suite or package and retrieves the results from the Oracle database (it looks up the results that utPLSQL stores in tables). The report that it generates is compatible with the unit test reports created by the Maven Surefire plugin.
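
A sketch of the plugin configuration for running the ‘All’ test suite – the plugin coordinates and version are assumptions that must match the artifact you installed with install:install-file, as are the ojdbc coordinates:

```xml
<plugin>
  <!-- assumed coordinates for the Server Labs utPLSQL plugin -->
  <groupId>com.theserverlabs.maven</groupId>
  <artifactId>maven-utplsql-plugin</artifactId>
  <configuration>
    <driver>oracle.jdbc.driver.OracleDriver</driver>
    <url>jdbc:oracle:thin:@localhost:1521:XE</url>
    <username>testing</username>
    <password>testing</password>
    <!-- either testSuiteName or packageName, not both -->
    <testSuiteName>All</testSuiteName>
  </configuration>
  <executions>
    <execution>
      <phase>test</phase>
      <goals><goal>execute</goal></goals>
    </execution>
  </executions>
  <dependencies>
    <dependency>
      <!-- assumed coordinates: must match your install:install-file step -->
      <groupId>com.oracle</groupId>
      <artifactId>ojdbc14</artifactId>
      <version>10.2.0</version>
    </dependency>
  </dependencies>
</plugin>
```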




Building the PL/SQL project

Open a command console and navigate to the folder which contains the downloaded project (the folder with pom.xml in it). Execute the following:

mvn test

You should see some output like this:

[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building PL SQL test project
[INFO]    task-segment: [test]
[INFO] ------------------------------------------------------------------------
[INFO] [resources:resources]
[INFO] Using default encoding to copy filtered resources.
[INFO] [compiler:compile]
[INFO] No sources to compile
[INFO] [resources:testResources]
[INFO] Using default encoding to copy filtered resources.
[INFO] [sql:execute {execution: create-schema}]
[INFO] Executing file: /media/WINDOWS/work/utplsql/plsqltest-svn/src/main/sql/mybooks/schema.sql
[INFO] 3 of 3 SQL statements executed successfully
[INFO] [sql:execute {execution: create-plsql-packages}]
[INFO] Executing file: /media/WINDOWS/work/utplsql/plsqltest-svn/src/main/plsql/betwnstr.sf
[INFO] Executing file: /media/WINDOWS/work/utplsql/plsqltest-svn/src/main/plsql/mybooks/mybooks.pkb
[INFO] Executing file: /media/WINDOWS/work/utplsql/plsqltest-svn/src/main/plsql/mybooks/mybooks.pks
[INFO] 3 of 3 SQL statements executed successfully
[INFO] [sql:execute {execution: insert-data}]
[INFO] Executing file: /media/WINDOWS/work/utplsql/plsqltest-svn/src/main/resources/data/mybooks.sql
[INFO] 6 of 6 SQL statements executed successfully
[INFO] [sql:execute {execution: create-plsql-test-packages}]
[INFO] Executing file: /media/WINDOWS/work/utplsql/plsqltest-svn/src/test/plsql/mybooks/ut_mybooks.pkb
[INFO] Executing file: /media/WINDOWS/work/utplsql/plsqltest-svn/src/test/plsql/mybooks/ut_mybooks.pks
[INFO] Executing file: /media/WINDOWS/work/utplsql/plsqltest-svn/src/test/plsql/suite.sp
[INFO] Executing file: /media/WINDOWS/work/utplsql/plsqltest-svn/src/test/plsql/ut_betwnstr.pkb
[INFO] Executing file: /media/WINDOWS/work/utplsql/plsqltest-svn/src/test/plsql/ut_betwnstr.pks
[INFO] 5 of 5 SQL statements executed successfully
[INFO] [utplsql:execute {execution: run-plsql-test-packages}]
[INFO] using JDBC driver : oracle.jdbc.driver.OracleDriver
[INFO] Running UTPLSQL test suite All
[INFO] Creating test report for run 134
[INFO] Creating test report for run 133
[INFO] [compiler:testCompile]
[INFO] No sources to compile
[INFO] [surefire:test]
[INFO] No tests to run.
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3 seconds
[INFO] Finished at: Wed May 13 17:59:08 CEST 2009
[INFO] Final Memory: 16M/295M
[INFO] ------------------------------------------------------------------------

If you connect to your database as the “testing” user, you should see that new tables and packages have been created. Below is an example of how my schema looks in SQL Developer:

Artifacts in SQL Developer

Artifacts in SQL Developer

Adding Continuous Integration with Hudson

Although it is great that we can install the schema and run PL/SQL unit tests with just one Maven command, we can add even more value by building this project in Hudson – a continuous integration system. We will configure Hudson to detect changes to the source code in the Subversion repository, re-installing the database artefacts and running the unit tests each time a change is found. This helps ensure that the source code in the repository always compiles and passes all the unit tests and highlights bugs and integration problems earlier in the development process, when they are cheaper to fix.

Adding the source code to Subversion

This post assumes that you already have a Subversion repository available. If you don’t, you can read how to get and install Subversion here. In the commands below, substitute the URL of your own Subversion repository where indicated. Our repositories are organised following the recommended Subversion repository layout, so the source code for the main branch is always in the /trunk folder.

To import the source code into the repository, execute the following commands in a command console on a machine with the Subversion client installed, adapting them for your platform and environment. Here I assume that you downloaded and unzipped the source code for the PL/SQL test project to /tmp/plsql-test-project.

svn mkdir <repository-url>/trunk -m "Create trunk"
cd /tmp/plsql-test-project/
svn import . <repository-url>/trunk -m "Initial import of PL/SQL test project"

Once you’ve done the import, check that you can see the source code by pointing a web browser at your repository URL.

Installing Hudson

Hudson is very easy to install. Go to the Hudson homepage and click on the link to download the latest version of hudson.war. Then, in a console, go to the directory to which you downloaded hudson.war and run the following command:

java -jar hudson.war --httpPort=9080

This starts up Hudson so that it is available on port 9080 instead of the standard 8080, which tends to collide with other applications (for example Oracle XE if you have it installed).

In a web browser, go to http://localhost:9080/ to view the Hudson homepage. Follow the steps below to create and configure the Hudson job to build the PL/SQL project:

  • Specify the Maven home in Manage Hudson > Configure System
  • Click on the “New Job” link in the top-left corner
  • Enter a job name and select “build maven2 project”
  • Click OK to create the job and you should be taken to the job configuration page.
  • In the Source Code Management section, click on Subversion
  • Enter the Subversion repository URL to which you uploaded the project code (the /trunk URL used in the import step)
  • Enter the username/password to access Subversion if Hudson prompts you for them
  • In the Build Triggers section, choose “Poll SCM” and enter */10 * * * * in the scheduling box (this polls Subversion for changes every 10 minutes)
  • In the ‘Goals and options’ part of the build section, enter “test” (without the quotes)
  • Click on Save

To test the build, click on the “Build Now” link in Hudson. Once the build has finished, you should see a new build in the “Build History” list of builds for the job. It should have a yellow circle next to it, indicating that the build completed ok but that there was a unit test failure. Click on the link for the build and you should see a screen like that below:

Hudson Job Test Failure screen


Note that at the centre of the screen, it says “Test Result (1 failure)”. Click on the failure (BETWNSTR.UT_BETWNSTR) to see the error message – EQ “zero start” Expected “abc” and got “ab”. Or, instead, click on the “Test Result” link to see the entire test report. This should show that 16 tests were executed, with 1 failure. These are all the tests that are included in the “All” utPLSQL package.
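Based on that error message, the failing assertion in ut_betwnstr.pkb presumably looks something like the sketch below, modelled on the standard utPLSQL betwnstr tutorial. The exact procedure name, arguments and formatting are assumptions; only the test description and the expected/actual values come from the report:

```sql
-- Hypothetical excerpt from ut_betwnstr.pkb (names and argument values
-- are assumptions based on the standard utPLSQL betwnstr example).
PROCEDURE ut_betwnstr IS
BEGIN
   utAssert.eq (
      'zero start',                 -- test description shown in the report
      betwnstr ('abcdefg', 0, 2),   -- actual result: 'ab'
      'abc'                         -- expected value: wrong, so the test fails
   );
END ut_betwnstr;
```

The fix applied later in this post simply changes the expected value from “abc” to “ab” so that the assertion passes.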

If we configure an email server in Hudson (out of the scope of this post), we can receive email notifications when the build fails, either because of a build problem or because of a unit test failure.

Fixing the test failure

To get rid of the test failure, we will modify the project source code. Firstly, we must check it out from Subversion, using the commands below (which you should customize to your environment):

mkdir /tmp/plsql-checkout
cd /tmp/plsql-checkout
svn co <repository-url>/trunk .

Edit line 46 of the file src/test/plsql/ut_betwnstr.pkb and change the string “abc” to “ab”. Then check the file into Subversion using the following command:

svn ci src/test/plsql/ut_betwnstr.pkb -m "fixed bug"

At this point you can either wait up to 10 minutes for Hudson to detect your change in Subversion or, if you are impatient (like me!), you can click on the “Build Now” link in the Hudson job to force Hudson to pick up the change and rebuild the project. When it executes the tests, there should now be no failures.


This is a long post with a lot of steps in it but most of them are setup-related. Once you have this set up, you’ll see that it is really easy to do unit testing and continuous integration with PL/SQL. This should help reduce integration time and increase the quality of the code deployed. It’s also worth mentioning that once you start using Hudson, you can take advantage of the vast number of Hudson plugins available. For example, you could update your bug and issue tracking system automatically when you check in a fix for a bug via one of the Hudson plugins.