The Server Labs Blog


Creating Sonar Reports from Hudson


In order to guarantee the quality of software development projects, it is important to be able to verify that a continuous integration build meets a minimum set of quality control criteria. The open source project Hudson provides the popular continuous integration server we will use throughout our example. Similarly, Sonar is a leading open source tool providing a centralized platform for storing and managing quality control indicators. By integrating Sonar with Hudson, we can extract and verify the quality control metrics stored by Sonar in an automated, recurrent manner from Hudson. By verifying these metrics we can qualify a given build as valid from a quality perspective, and quickly flag builds where violations occur. At the same time, it is very useful to generate summaries of key quality metrics automatically, informing interested parties with a daily email.

Installing Hudson

As a first step, you will need to download and install Hudson from the official project site.

Installing the Groovy Postbuild Plugin

In order to be able to extend Hudson with custom Groovy-based scripts, we will use the Groovy Postbuild Plugin. To install this plugin, you will have to click on Manage Hudson followed by Manage Plugins, as shown below:

You will then have to select the Available tab at the top, and search for Groovy Postbuild Plugin under the section Other Post-Build Actions.

Sonar Reporting the Groovy Way

Once the Groovy Postbuild Plugin has been successfully installed and Hudson restarted, you can go ahead and download the SonarReports package and extract it to ${HUDSON_HOME}, the home directory of the Hudson server (e.g. the folder .hudson under the user’s home directory on Windows systems). This zip file contains the file SonarReports.groovy under scripts/groovy, which will be created under ${HUDSON_HOME} after expansion.

Hudson Job Configuration

To facilitate reuse of our Hudson configuration for Sonar, we will first create a Sonar Metrics job to be used as a template. We can then create a new job for each project we wish to create Sonar reports for by simply copying this job template.

In the Sonar Metrics job, we first create the necessary parameters that will be used as thresholds and validated by our Groovy script. To this end, we select the checkbox This build is parameterized under the job’s configuration. We then configure the parameters as shown below, where we have provided the corresponding screenshots:

  • projectName: project name that will appear in emails sent from Hudson.
  • sonarProjectId: internal project ID used by Sonar.
  • sonarUrl: URL for the Sonar server.
  • emailRecipients: email addresses for recipients of Sonar metrics summary.
  • rulesComplianceThreshold: minimum percentage of rule compliance for validating a build. A value of false means this metric will not be enforced.
  • blockerThreshold: maximum number of blocker violations for validating a build. A value of false means this metric will not be enforced.
  • criticalThreshold: maximum number of critical violations for validating a build. A value of false means this metric will not be enforced.
  • majorThreshold: maximum number of major violations for validating a build. A value of false means this metric will not be enforced.
  • codeCoverageThreshold: minimum percentage of code coverage for unit tests for validating a build. A value of false means this metric will not be enforced.
  • testSuccessThreshold: minimum percentage of successful unit tests for validating a build. A value of false means this metric will not be enforced.
  • violationsThreshold: maximum number of violations of all types for validating a build. A value of false means this metric will not be enforced.

Finally, we enable the Groovy Postbuild plugin by selecting the corresponding checkbox under the Post-build Actions section of the job configuration page. In the text box, we include the following Groovy code to call into our script:

sonarReportsScript = "${System.getProperty('HUDSON_HOME')}/scripts/groovy/SonarReports.groovy"
shell = new GroovyShell(getBinding())
println "Executing script for Sonar report generation from ${sonarReportsScript}"
shell.evaluate(new File(sonarReportsScript))

Your Hudson configuration page should look like this:

Generating Sonar Reports

In order to automatically generate Sonar reports, we can configure our Hudson job to build periodically (e.g. daily) by selecting this option under Build Triggers. The job will then execute with the specified frequency, using the default quality thresholds we configured in the job’s parameters.

It is also possible to run the job manually to generate reports on demand at any time. In this case, Hudson will ask for the value of the threshold parameters that will be passed in to our Groovy script. These values will override the default values specified in the job’s configuration. Here is an example:

Verifying Quality Control Metrics

When the Hudson job runs, our Groovy script will verify that any thresholds defined in the job’s configuration are met by the project metrics extracted from Sonar. If the thresholds are met, the build will succeed and a summary of the quality control metrics will appear in the Hudson build. In addition, a summary email will be sent to the recipient list emailRecipients, providing interested parties with information regarding the key analyzed metrics.
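As an illustration of this verification step, the following Python sketch shows how thresholds of both kinds might be checked (the original SonarReports.groovy is written in Groovy and is not reproduced here; the metric names and the validate_build helper are assumptions for illustration), treating a value of false as not enforced:

```python
def validate_build(metrics, thresholds):
    """Check Sonar metrics against the job's threshold parameters.

    A threshold value of "false" means the metric is not enforced.
    Returns (passed, violations), where violations lists failing metrics.
    """
    # thresholds where a *minimum* value must be reached; all others are maximums
    minimums = ("rulesComplianceThreshold", "codeCoverageThreshold",
                "testSuccessThreshold")
    violations = []
    for name, limit in thresholds.items():
        if limit == "false":            # this metric is not enforced
            continue
        value = metrics[name]
        if name in minimums:
            if value < float(limit):    # e.g. coverage below the required minimum
                violations.append(name)
        elif value > float(limit):      # e.g. more blockers than allowed
            violations.append(name)
    return (not violations, violations)

# example: coverage meets its minimum, but 3 blocker violations exceed the limit of 0
passed, failed = validate_build(
    {"codeCoverageThreshold": 85.0, "blockerThreshold": 3},
    {"codeCoverageThreshold": "80", "blockerThreshold": "0"})
# passed is False; failed contains "blockerThreshold"
```

The same pass/fail decision then drives the build result and the summary email described above.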

On the other hand, if the thresholds are not met, the build will be marked as failed and the metric violations will be described in the Hudson build. Similarly, an email will be sent out informing recipients of the quality control violation.


This article demonstrates how Hudson can be extended with the use of dynamic programming languages like Groovy. In our example, we have created a Hudson job that verifies quality control metrics generated by Sonar and automatically sends quality reports by email. This type of functionality is useful in continuous integration environments, in order to extend the default features provided by Hudson or Sonar to meet custom needs.

Persistence Strategies for Amazon EC2

At The Server Labs, we often run into a need that comes naturally with the on-demand nature of Cloud Computing. Namely, we’d like to keep our team’s AWS bill in check by ensuring that we can safely turn off our Amazon EC2 instances when not in use. In fact, we’d like to take this practice one step further and automate the times when an instance should be operational (e.g. only during business hours). Stopping an EC2 instance is easy enough, but how do we ensure that our data and configuration persist across server restarts? Fortunately, there are a number of possible approaches to solve the persistence problem, but each one brings its own pitfalls and tradeoffs. In this article, we analyze some of the major persistence strategies and discuss their strengths and weaknesses.

A Review of EC2

Since Amazon introduced their EBS-backed AMIs in late 2009 [1], there has been a great deal of confusion around how this AMI type impacts the EC2 lifecycle operations, particularly in the area of persistence. In this section, we’ll review the often-misunderstood differences between S3 and EBS-backed EC2 instances, which will be crucial as we prepare for our discussion of persistence strategies.

Not All EC2 Instances are Created Equal

An Amazon EC2 instance can be launched from one of two types of AMIs: the traditional S3-backed AMI and the new EBS-backed AMI [2]. These two AMIs exhibit a number of differences, for example in their lifecycle [3] and data persistence characteristics [4]:

Characteristic | Amazon EBS | Instance Store (S3)
Lifecycle | Supports stopping and restarting of the instance by saving state to EBS. | Instance cannot be stopped; it is either running or terminated.
Data persistence | Data persists in EBS on instance failure or restart. Data can also be configured to persist when the instance is terminated, although it does not do so by default. | Instance storage does not persist on instance shutdown or failure. It is possible to attach non-root devices using EBS for data persistence as needed.

As explained in the table above, EBS-backed EC2 instances introduce a new stopped state, unavailable for S3-backed instances. It is important to note that while an instance is in a stopped state, it will not incur any EC2 running costs. You will, however, continue to be billed for the EBS storage associated with your instance. The other benefit over S3-backed instances is that a stopped instance can be started again while maintaining its internal state. The following diagrams summarize the lifecycles of both S3 and EBS-backed EC2 instances:


Lifecycle of an S3-backed EC2 Instance.


Lifecycle of an EBS-backed EC2 Instance.

Note that while the instance ID of a restarted EBS-backed instance will remain the same, it will be dynamically assigned a new set of public and private IP and DNS addresses. If you would like to assign a static IP address to your instance, you can still do so by using Amazon’s Elastic IP service [5].

Persistence Strategies

With an understanding of the differences between S3 and EBS-backed instances, we are well equipped to discuss persistence strategies for each type of instance.

Persistence Strategy 1: EBS-backed Instances

First, we’ll start with the obvious choice: EBS-backed instances. When this type of instance is launched, Amazon automatically creates an Amazon EBS volume from the associated AMI snapshot, which then becomes the root device. Any changes to the local storage are then persisted in this EBS volume, and will survive instance failures and restarts. Note that by default, terminating an EBS-backed instance will also delete the EBS volume associated with it (and all its data), unless explicitly configured not to do so [6].

Persistence Strategy 2: S3-backed Instances

In spite of their ease of use, EBS-backed instances present a couple of drawbacks. First, not all software and architectures are supported out-of-the-box as EBS-backed AMIs, so the version of your favorite OS might not be available. Perhaps more importantly, the EBS volume is mounted as the root device, meaning that you will also be billed for storage of all static data, such as operating system files, external to your application or configuration.

To circumvent these disadvantages, it is possible to use an S3-backed EC2 instance, which gives you direct control over which files to persist. However, this flexibility comes at a price. Since S3-backed instances use local storage as their root device, you’ll have to manually attach and mount an EBS volume for persisting your data. Any data you write directly to your EBS mount will be automatically persisted. In other cases, configuration files exist at standard locations outside of your EBS mount, and you will still want to persist changes to them. In such situations, you would typically create a symlink on the root device pointing to your EBS mount.

For example, assuming you have mounted your EBS volume under /ebs, you would run the following shell commands to persist your apache2 configuration:

# first backup original configuration
mv /etc/apache2/apache2.conf{,.orig}
# use your persisted configuration from EBS by creating a symlink
ln -s /ebs/etc/apache2/apache2.conf /etc/apache2/apache2.conf

Once your S3-backed instance is terminated, any local instance storage (including symlinks) will be lost, but your original data and configuration will persist in your EBS volume. If you would then like to recover the state persisted in EBS upon launching a new instance, you will have to go through the process of recreating any symlinks and/or copying any pertinent configuration and data from your EBS mount to your local instance storage.

Synchronizing between the data persisted in EBS and that in the local instance storage can become complex and difficult to automate when launching new instances. In order to help with these tasks, there are a number of third-party management platforms that provide different levels of automation. These are covered in more detail in the next section.

Persistence Strategy 3: Third-party Management Platforms

In the early days of AWS, there were few third-party platforms available for managing and monitoring your AWS infrastructure, and those that existed were limited. Moreover, in order to manage and monitor your instances for you, these platforms necessarily need access to your EC2 instance keys and AWS credentials. Although a reasonable compromise for some, this requirement could pose an unacceptable security risk for others, who must guarantee the security and confidentiality of their data and internal AWS infrastructure.

Given these limitations, The Server Labs developed its own Cloud Management Framework in Ruby for managing EC2 instances, EBS volumes and internal SSH keys in a secure manner. Our framework automates routine tasks such as attaching and mounting EBS volumes when launching instances, as well as providing hooks for the installation and configuration of software and services at startup based on the data persisted in EBS. It even goes one step further by mounting our EBS volumes using an encrypted file system to guarantee the confidentiality of our internal company data.

Today, companies need not necessarily develop homegrown frameworks, and can increasingly rely on third-party platforms. An example of a powerful commercial platform for cloud management is Rightscale. For several of our projects, we rely on Rightscale to automatically attach EBS volumes when launching new EC2 instances. We also make extensive use of scripting to install and configure software onto our instances automatically at boot time using Rightscale’s RightScript technology [7]. These features make it easy to persist your application data and configuration in EBS, while automating the setup and configuration of new EC2 instances associated with one or more EBS volumes.

Automating Your Instance Uptimes

Now that we have discussed the major persistence strategies for Amazon EC2, we are in a good position to tackle our original use case. How can we schedule an instance in Amazon so that it is only operational during business hours? After all, we’d really like to avoid getting billed for instance uptime during times when it is not really needed.

To solve this problem, we’ll have to address two independent considerations. First, we’ll have to ensure that all of our instance state (including data and configuration) is stored persistently. Second, we’ll have to automate the starting and stopping of our instance, as well as restoring its state from persistent storage at boot time.

Automation Strategy 1: EBS-backed Instances

By using an EBS-backed instance, we ensure that all of its state is automatically persisted even if the instance is restarted (provided it is not terminated). Since the EBS volume is mounted as the root device, no further action is required to restore any data or configuration. Last, we’ll have to automate starting and stopping of the instance based on our operational times. For scheduling our instance uptimes, we can take advantage of the Linux cron service. For example, in order to schedule an instance to be operational during business hours (9am to 5pm, Monday-Friday), we could create the following two cron jobs:

0 9 * * 1-5 /opt/aws/bin/ i-10a64379
0 17 * * 1-5 /opt/aws/bin/ i-10a64379

The first cron job will schedule the EBS-backed instance identified by instance ID i-10a64379 to be started at 9am, Monday through Friday. Similarly, the second job schedules the same instance to be stopped at 5pm, Monday through Friday. The cron service invokes helper scripts that facilitate the configuration of the AWS command-line tools according to your particular environment. You could run these cron jobs from another instance in the cloud, or from a machine in your office.
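The window encoded by these two cron entries amounts to a simple predicate. The following Python sketch (an illustration only, not part of the original helper scripts) expresses the business-hours schedule:

```python
from datetime import datetime

def should_be_running(now):
    """True when the instance should be up: 9am-5pm, Monday through Friday.

    Mirrors the two cron entries above: start at 09:00, stop at 17:00,
    on days 1-5 (Monday through Friday).
    """
    return now.weekday() < 5 and 9 <= now.hour < 17

# a Wednesday at noon falls inside the window
print(should_be_running(datetime(2010, 6, 2, 12, 0)))
```

Such a predicate could, for example, back a single reconciliation cron job that starts or stops the instance depending on the current time, rather than two separate entries.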

The following snippet provides sample contents for the start script, which does the setup necessary to invoke the AWS ec2-start-instances command. Note that this script assumes that your EBS-backed instance was previously launched manually and that you know its instance ID.

# Name:
# Description: this script starts the EBS-backed instance with the specified Instance ID
# by invoking the AWS ec2-start-instances command
# Arguments: the Instance ID for the EBS-backed instance that will be started.

export JAVA_HOME=/usr/lib/jvm/java-6-sun-
export EC2_HOME=/opt/aws/ec2-api-tools-1.3
export EC2_PRIVATE_KEY=/opt/aws/keys/private-key.pem
export EC2_CERT=/opt/aws/keys/cert.pem
# uncomment the following line to use Europe as the default Zone
#export EC2_URL=

# the instance ID is passed in as the first argument
INSTANCE_ID=$1

echo "Starting EBS-backed instance with ID ${INSTANCE_ID}"
ec2-start-instances ${INSTANCE_ID}

Similarly, the stop script would stop your EBS-backed instance by invoking ec2-stop-instances followed by your instance ID. Note that the instance ID of EBS-backed instances will remain the same across restarts.

Automation Strategy 2: S3-backed Instances

Amazon instances backed by S3 present the additional complexity that the local storage is not persistent and will be lost upon terminating the instance. In order to persist application data and configuration changes independently of the lifecycle of our instance, we’ll have to rely on EBS. Additionally, we’ll have to carefully restore any persisted state upon launching a new EC2 instance.

The Server Labs Cloud Manager allows us to automate these tasks. Among other features, it automatically attaches and mounts a specified EBS volume when launching a new EC2 instance. It also provides hooks to invoke one or more startup scripts directly from EBS. These scripts are specific to the application, and can be used to restore instance state from EBS, including any appropriate application data and configuration.

If you must use S3-backed instances for your solution, you’ll either have to develop your own framework along the lines of The Server Labs Cloud Manager, or rely on third-party management platforms like Rightscale. Otherwise, EBS-backed instances provide the path of least resistance to persisting your instance data and configuration.

Automation Strategy 3: Rightscale

Rightscale provides a commercial platform with support for boot time scripts (via RightScripts) and automatic attachment of EBS volumes. In addition, Rightscale allows applications to define arrays of servers that grow and shrink based on a number of parameters. By using the server array schedule feature, you can define how an alert-based array resizes over the course of a week [8], and thus ensure a single running instance of your server during business hours. In addition, leveraging boot time scripts and the EBS volume management feature enables you to automate setup and configuration of new instances in the array, while persisting changes to your application data and configuration. Using these features, it is possible to build an automated solution for a server that operates during business hours, and that can be shut down safely when not in use.


This article describes the major approaches to persisting state in Amazon EC2. Persisting state is crucial to building robust and highly available architectures with the capacity to scale. Not only does this promote operational efficiency by consuming resources only when a need exists; it also protects your application state, so that if your instance fails or is accidentally terminated you can automatically launch a new one and continue where you left off. In fact, these same ideas can also enable your application to scale seamlessly by automatically provisioning new EC2 instances in response to a growth in demand.


[1] New Amazon EC2 Feature: Boot from Elastic Block Store. Original announcement from Amazon explaining the new EC2 boot from EBS feature.

[2] Amazon Elastic Compute Cloud User Guide: AMI Basics. Covers basic AMI concepts for S3 and EBS AMI types.

[3] The EC2 Instance Life Cycle: excellent blog post describing major lifecycle differences between S3 and EBS-backed EC2 instances.

[4] Amazon Elastic Compute Cloud User Guide: AMIs Backed by Amazon EBS. Learn about EBS-backed AMIs and how they work.

[5] AWS Feature Guide: Amazon EC2 Elastic IP Addresses. An introduction to Elastic IP Addresses for Amazon EC2.

[6] Amazon Elastic Compute Cloud User Guide: Changing the Root Volume to Persist. Learn how to configure your EBS-backed EC2 instance so that the associated EBS volume is not deleted upon termination.

[7] RightScale User Guide: RightScripts. Learn how to write your own RightScripts.

[8] RightScale User Guide: Server Array Schedule. Learn how to create an alert-based array to resize over the course of the week.

HTML5 Websocket vs Flex Messaging

Nearly 18 months ago, I wrote an article on this blog which showed how to develop a simple Flex web application using ActiveMQ as a JMS server. It has proved a popular article, so there must be a fair amount of interest in this topic.

With all the hype surrounding HTML5 at the moment, I figured I’d try to update the application for the HTML5 world, at the same time seeing how easy it would be to replace Adobe Flex. I’m sure you’re all familiar with the fact that Steve Jobs won’t let Flash (and therefore Flex) onto either the iPhone or iPad, so it’s worth investigating if the alternatives (in this case HTML5 websocket) are really up to scratch.

The example shown in this article uses HTML5 websocket, Apache ActiveMQ, the Stomp messaging protocol and (indirectly) Jetty 7. It currently only runs in Google Chrome 5, since this is the only browser with websocket support available as of May 2010.

What is Websocket?

In short, it’s a way of opening a firewall-friendly, bi-directional communications channel with a server. The channel stays open for a long period of time, allowing the server and the client web browser to interchange messages without resorting to polling, which consumes bandwidth and a lot of server resources.

This article describes websocket in much more depth.

Websocket forms part of the (still unapproved) HTML5 specification which all browsers will eventually have to implement.

Updating the Trader app

In order to get the best out of this section, it is probably best to have read the original Flex-JMS article.

Here is the code for the updated trader app. Instead of having a server webapp component with BlazeDS acting as a message proxy and the flex clients, we have a bunch of HTML/CSS/JS files (in the /html folder) and some code to put messages in the JMS queues published by Apache ActiveMQ (in the /server folder).

Setting up Apache ActiveMQ

Apache ActiveMQ now comes with support for Websocket (implemented behind the scenes with Jetty 7). We will use ActiveMQ both as our messaging server and web server in this example, though that is perhaps not the best production configuration.

This article explains how to configure ActiveMQ and Websocket. I will repeat the key instructions here for the sake of simplicity:

  1. Download the latest snapshot of Apache ActiveMQ 5.4 from here and unzip it to a location on your filesystem that we will call $ACTIVEMQ_HOME
  2. Edit $ACTIVEMQ_HOME/conf/activemq.xml and change the transportConnectors section so that it looks like the example below:
  3. Start ActiveMQ by running bin/activemq from the $ACTIVEMQ_HOME directory. Go to http://localhost:8161/admin/ and log in with the username/password admin/admin to check that everything is working ok.
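As a sketch of what step 2 might look like, assuming ActiveMQ's ws:// transport on port 61614 (the port the Stomp client connects to later in this article), the transportConnectors section could be configured like this:

```xml
<transportConnectors>
    <!-- standard connectors for JMS clients -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
    <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
    <!-- websocket connector used by the browser-based Stomp client -->
    <transportConnector name="ws" uri="ws://0.0.0.0:61614"/>
</transportConnectors>
```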

Configure and install the application

Download the trader app code and copy the /html folder to $ACTIVEMQ_HOME/webapps/demo, renaming it to /trader-app-websocket (i.e. the full path should be $ACTIVEMQ_HOME/webapps/demo/trader-app-websocket).

Edit stock-quote.js and note the following section:

    var url = 'ws://localhost:61614/stomp';
    var login = 'guest';
    var passcode = 'guest';
    destination = '/topic/stockQuoteTopic';

The url attribute refers to the URL of the ActiveMQ server. Due to a bug in Chrome running on Ubuntu 9.10, I had to put the IP address of my machine here, but if you’re running on another Linux flavour, OS X or Windows, I would imagine that leaving it as localhost should be OK. The username and password are guest/guest, which is standard for ActiveMQ.

Download the latest version of Google Chrome (I have 5.0.375.55 which was released a few days ago) and open the URL http://localhost:8161/demo/trader-app-websocket/. You should see a UI that is similar to the Flex app developed in the original article:


Open up a terminal window and go to the location to which you extracted the /server part of the code download. Run the following (this assumes Maven is installed):

mvn clean compile exec:java -Dexec.mainClass="com.theserverlabs.flex.trader.JSONFeed"

You should see a whole bunch of stock quote information like that shown below scrolling in the terminal:

{"symbol": "XOM",
"name": "Exxon Mobile Corp",
"low": 61.317410451219146,
"high": 61.56,
"open": 61.56,
"last": 61.317410451219146,
"change": 61.317410451219146}

This Java program is the same as that used in the original post. It generates random stock price information for a variety of stocks and publishes it in the stockQuote topic in ActiveMQ. In this case, it generates JMS text messages which contain data formatted in the JSON format.

If you go back to the Chrome browser window, you should see the stock quotes update. If they don’t update, click on refresh:


This is pretty much exactly how the original Flex application worked. The UI colours etc. are slightly different, and I’ve not implemented the functionality for subscribing/unsubscribing from a stock price, but that was purely due to time constraints, not because it is difficult.

How does it work?

When the browser opens the page, it executes the following code in stock-quote.js, which subscribes to the stock quote service:


    var client, destination;
    var url = 'ws://localhost:61614/stomp';
    var login = 'guest';
    var passcode = 'guest';
    destination = '/topic/stockQuoteTopic';

    client = Stomp.client(url);
    client.connect(login, passcode, onconnect);

Here we use the Stomp library provided by Jeff Mesnil, which enables us to access ActiveMQ using the Stomp protocol instead of JMS. We use it here because it is simple and cross-platform. There is no way of subscribing to a JMS server directly from JavaScript, because no JMS client library is available for the browser.

In the same block of code, you can see the code that we execute when we receive a message:

    var onconnect = function(frame) {
      client.subscribe(destination, function(message) {
        var quote = JSON.parse(message.body);
        $('.' + quote.symbol).replaceWith("<tr class='" + quote.symbol + "'>" +
            "<td>" + quote.symbol + "</td>" +
            "<td>" + quote.name + "</td>" +
            "<td>" + quote.last.toFixed(2) + "</td>" +
            "<td>" + quote.change.toFixed(2) + "</td>" +
            "<td>" + quote.high.toFixed(2) + "</td>" +
            "<td>" + quote.low.toFixed(2) + "</td>" +
            "</tr>");
      });
    };

When we receive a message, we parse its body with JSON.parse, then use a jQuery selector to find the HTML element whose class attribute equals the stock symbol (e.g. if the symbol is IBM, we look for the element with class="IBM") and replace its contents with the table row HTML generated in the handler. Simple really.

The rest of the code is just HTML and CSS and is not really that interesting for this article.


It is pretty easy to develop an application that uses Websocket – you only have to look at how little real code there is in this example. I’d say it is as easy as developing the original Flex app, so from a development point of view, there’s little to choose between these technologies.

Unfortunately the only browser that currently supports Websocket is Google Chrome (and the implementation is a bit buggy). Other browsers (especially Firefox and Safari) should have this functionality soon though. One of the arguments used in support of Flash/Flex has always been its large installed base. Given that the iPhone and iPad are not part of this installed base, it is questionable whether this can still be used as a justification for using Flash/Flex. Sure, there aren’t that many browsers that support Websocket, but in 6 months’ time they probably all will, and you won’t need any proprietary plugin to access the apps built using it. I’d definitely recommend that people developing internal corporate apps, who can force their end users to use a browser with Websocket support, take a look at this technology. People developing publicly accessible webapps are probably going to have to wait until it is more widely implemented in browsers, and provide graceful fallback in the case in which it isn’t.

I should point out that I have an iPhone and use Ubuntu 9.10 64-bit at work and I hate not being able to see content on my iPhone because it has been implemented in Flash and my entire firefox browser often crashes completely due to the 64-bit linux Flash plugin.

I’ve gone from being a fan of Flash/Flex to not being so sure about it. It’s hard to escape the feeling that HTML5 will soon offer much of the functionality currently provided by Flash/Flex. I think the future of these technologies is going to depend on Adobe innovating and offering things that are not possible in HTML5; otherwise there will be little reason to use them.

Dynamically changing log level with Weblogic, log4j, JMX and WLST

Logging is an uninteresting but nevertheless fundamental part of application development. It is useful not just when you are coding an application but in diagnosing problems of any nature once an application passes into production.

Unfortunately, it is not that clear how to do logging well when developing JEE applications that are to be deployed on a Weblogic server. With this article, I hope to clear up some of the confusion or, at the very least, explain a method that works for me!

The article is based on the use of Apache log4j since that is the most commonly used logging library. The article explains:

  • how to configure log4j properly in a JEE app to be run on Weblogic
  • how to dynamically configure the log level using JMX and Weblogic Scripting Tool

The article assumes some familiarity with Apache Maven, Eclipse and Weblogic.

The code is available here, although you may just prefer to download the EAR file which saves you having to compile any code with Maven. The file contains the code for the JEE application, the code for the JMX library developed by The Server Labs and used in this article and the scripts for WLST.

Configuring log4j in Weblogic

Weblogic 10 comes with a modified version of log4j by default, so you can use log4j in your application without having to deploy the log4j library yourself. Unfortunately, my experience (in WLS 10.0) is that if you do this, you will not be able to configure log4j for your application, because log4j is loaded in a classloader separate from that of your application. Any log4j configuration that you include will be ignored.

Instead, what you have to do is the following:

1) Include the log4j JAR in your application. If your app is an EAR (like mine), then the log4j JAR should go in the .ear file (in the root directory or in /APP-INF/lib). In my project, I have used a Maven multi-project setup with 3 projects: 1 for the EAR, 1 for the EJBs and 1 for the webapp. In this case, I have to add the log4j JAR to the EAR project’s pom.xml:
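As a sketch, a log4j dependency entry in the EAR project's pom.xml might look like this (the version number is an assumption):

```xml
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.15</version>
</dependency>
```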


2) Tell Weblogic to use your version of the log4j library and not its own. To do this, you specify the following in the weblogic-application.xml file which is in the EAR (/META-INF/weblogic-application.xml):
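As a sketch, assuming WebLogic's standard classloader filtering mechanism, the relevant stanza in weblogic-application.xml might look like this:

```xml
<weblogic-application>
    <!-- load org.apache.log4j.* from the application, not the server -->
    <prefer-application-packages>
        <package-name>org.apache.log4j.*</package-name>
    </prefer-application-packages>
</weblogic-application>
```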


This basically says that when the app asks for a class in the package org.apache.log4j or its sub-packages, it should be looked up in the app’s classloader and not in the Weblogic server classloader.

3) Configure log4j with a properties file

Log4j requires that you configure it in order to produce log messages. I include the following contents in the log4j properties file under /APP-INF/classes/ in the EAR file:

log4j.rootLogger=INFO, stdout, file

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %-5p %c %x - %m%n

log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/bea/logs/aplicaciones/blog-logging/logging_${weblogic.Name}.log
log4j.appender.file.Append=false
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.MaxFileSize=20000KB
log4j.appender.file.MaxBackupIndex=5
log4j.appender.file.layout.ConversionPattern=[%d] %-5p %c %x - %m%n

This configures log4j to send output both to the console in which Weblogic was started and to a file (/bea/logs/aplicaciones/blog-logging/logging_${weblogic.Name}.log). You will probably want to change the file path to something that works on your machine.

Testing that log4j is working OK

In order to test these changes, download the EAR file and deploy it to a Weblogic server. I tested on Weblogic 10.3, but I believe it should work on any Weblogic 10.x or later server.

Go to the URL http://localhost:7001/logging/test and you should see a very basic HTML page that tells you to check the logs:


Take a look at the console in which weblogic is running (or the Server.out file if you have the output redirected) and you should see output like that shown below:

[2010-04-21 15:29:47,888] WARN  - warn
[2010-04-21 15:29:47,888] INFO  - info
[2010-04-21 15:29:47,888] DEBUG  - debug
[2010-04-21 15:29:47,889] ERROR  - error
[2010-04-21 15:29:47,889] FATAL  - fatal
[2010-04-21 15:29:48,056] WARN  - warn
[2010-04-21 15:29:48,056] INFO  - info
[2010-04-21 15:29:48,056] DEBUG  - debug
[2010-04-21 15:29:48,056] ERROR  - error
[2010-04-21 15:29:48,056] FATAL  - fatal

If you examine the application, you can see that accessing the URL causes a Servlet to get executed. The servlet writes its messages to the log and invokes an EJB3 session bean which also writes its messages to the log.

This shows that this log solution works equally for web as for EJB and that there is no interference with the weblogic logging system.

Making it dynamic

This is a great solution while you’re still developing the app, because you can change the debug level just by modifying the log4j properties file and redeploying the app. However, this is not something you can feasibly do in a production environment – not at the companies I’ve worked at anyway – due to the interruption in service it causes to the end users.

Why would you want to change the log level? Well, if you have to diagnose a problem in the application, it often makes sense to increase the log level of either the whole application or a subset of it so that you receive more information to help you diagnose the problem. Once you have diagnosed the problem, you return the logging to its original level so as not to affect the performance of the application unnecessarily.

What we really want is to be able to change the log level dynamically (i.e. without redeploying the application), ideally using administrator tools (since that is what the people who manage production systems tend to use) – not developer tools.

First I’ll show you how to change the log level and then I’ll explain how it works.

Using WLST and JMX to change the log level

If you have deployed the example application, go to your domain home in a terminal window and execute the setDomainEnv script. The idea is to update your shell with the domain environment variables such as JAVA_HOME, CLASSPATH etc. On my linux box, I execute the following:

$ cd $DOMAIN_HOME/bin
$ source setDomainEnv.sh

To check it has worked, run “echo $CLASSPATH” on unix or “echo %CLASSPATH%” on windows and you should see a long classpath.

Once you have set up the environment, execute the script that is included in the /scripts directory of the code ZIP:

java -classpath $CLASSPATH  $JAVA_OPTIONS  weblogic.WLST

You should see the following output:

$ java -classpath $CLASSPATH  $JAVA_OPTIONS  weblogic.WLST

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

==>Insufficient arguments
==>Syntax: [APP NAME] [LOGGER] [LEVEL]
==> where: APP NAME=context root of the app e.g. xxx for /xxx
==>        LOGGER=log4j logger name
==>   e.g: xxx DEBUG

As the instructions indicate, you have to call the script with the name of the app (which is the context root of the application web), the Log4j logger that you wish to modify (normally a Java package name) and the level that you want to set (TRACE/DEBUG/INFO/WARN/ERROR/FATAL).

In the example below, we set the log level to ERROR for a logger in the application “logging”, which is what the example application is called. The output is the following:

java -classpath $CLASSPATH  $JAVA_OPTIONS  weblogic.WLST logging ERROR 

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Connecting to t3://localhost:7001 with userid weblogic ...
Successfully connected to Admin Server 'AdminServer' that belongs to domain 'log4j'.

Warning: An insecure protocol was used to connect to the 
server. To ensure on-the-wire security, the SSL port or 
Admin port should be used instead.

Location changed to custom tree. This is a writable tree with No root.
For more help, use help(custom)

changing log level to ERROR for app logging on server t3://localhost:7001
  ... done. New log levels are:

Exiting WebLogic Scripting Tool.

As you can see from the line “New log levels are:{}”, the log level was changed from the default in the file (DEBUG) to ERROR.

If you go to the logging test page again (http://localhost:7001/logging/test), you should see that there are no longer entries for the DEBUG, INFO or WARN levels:

[2010-04-21 17:47:09,728] ERROR  - error
[2010-04-21 17:47:09,728] FATAL  - fatal
[2010-04-21 17:47:09,728] ERROR  - error
[2010-04-21 17:47:09,728] FATAL  - fatal

So, we can now dynamically change the log level of a logger in our application using an administrator tool (WLST). Pretty cool, huh?

Here I have specified a single high-level logger in my application. You can only pass one logger as an argument to the script, but there is nothing to stop you from running the same script several times with different loggers, e.g. once to set your EJB package to DEBUG to get detailed logging of the session beans and again to set org.hibernate to INFO to get more info on what Hibernate is doing.

How do I get this to work in my application?

Simple. Include our jmx-logging JAR in your web application and add the following lines to your web.xml file:


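A minimal web.xml registration for the listener class discussed in the next section would be something like this (a sketch, not necessarily the exact original listing):

```xml
<!-- Registers the context listener that publishes the per-application
     log-management MBean with the Weblogic JMX subsystem at startup -->
<listener>
    <listener-class>com.theserverlabs.jmx.web.JmxServletContextListener</listener-class>
</listener>
```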
At startup time, a JMX MBean will be registered with the Weblogic JMX subsystem. The MBean is registered under a name which includes your application name, meaning there will be one MBean per application deployed on the server.

Once deployed, you can use the WLST script discussed in the previous section to interact with the MBean for your application and thereby change the log level of your application.

OK, how does all this really work?

If you look at the source code for the jmx-logging JAR (included in the source code bundle), you will find a class called com.theserverlabs.jmx.ApplicationLogManager. This class implements the interface ApplicationLogManagerMBean, whose methods are JMX MBean-compatible. It provides methods that allow a caller to set the log level for a given log4j logger:

	public void setLogLevel(String logger, String level) {
		Logger l = Logger.getLogger(logger);
		l.setLevel(Level.toLevel(level));
	}
and to interrogate log4j for the currently-configured loggers and their levels, returning a map of Strings:

	public Map<String, String> getLogLevels() {
		Map<String, String> result = new HashMap<String, String>();
		Enumeration e = Logger.getRootLogger().getLoggerRepository().getCurrentLoggers();
		while (e.hasMoreElements()) {
			Logger l = (Logger) e.nextElement();
			if (l.getLevel() != null) {
				result.put(l.getName(), l.getLevel().toString());
			}
		}
		return result;
	}

This MBean can be registered with any JMX system, allowing its methods to be invoked via JMX calls. In this case, we really want to register it with the Weblogic JMX system when the application starts up. To accomplish this, we use the class com.theserverlabs.jmx.web.JmxServletContextListener, which is registered in the web.xml of the web application:


A modified version of the source code for JmxServletContextListener is shown below, with error-handling and logging removed for clarity. When Weblogic starts our application and calls the contextInitialized() method, we register a new instance of ApplicationLogManager with the object name “com.theserverlabs.jmx:type=ApplicationLogManager,name=[NAME OF APPLICATION]LogManager”. This name is generated in the getObjectName() method. When Weblogic stops the web application, it calls the contextDestroyed() method, which un-registers the instance of ApplicationLogManager from the JMX MBean server.

public class JmxServletContextListener implements ServletContextListener {

	private InitialContext ctx;
	private MBeanServer server;

	public JmxServletContextListener() throws NamingException {
		// Weblogic binds its runtime MBean server in JNDI under this name
		ctx = new InitialContext();
		server = MBeanServer.class.cast(ctx.lookup("java:comp/env/jmx/runtime"));
	}

	public void contextInitialized(ServletContextEvent sce) {
		server.registerMBean(new ApplicationLogManager(), getObjectName(sce));
	}

	public void contextDestroyed(ServletContextEvent sce) {
		if (server.isRegistered(getObjectName(sce))) {
			server.unregisterMBean(getObjectName(sce));
		}
	}

	private ObjectName getObjectName(ServletContextEvent sce) throws MalformedObjectNameException, NullPointerException {
		String appName = sce.getServletContext().getContextPath().substring(1);
		return new ObjectName("com.theserverlabs.jmx:type=ApplicationLogManager,name=" + appName + "LogManager");
	}
}

The WLST script

That’s pretty much all there is to it in terms of application code. The only thing left to discuss is the Weblogic Scripting Tool (WLST) script that modifies the log level. Below is the source code (it is Python code):

def changeServerAppLogLevel(app, logger, level, serverUrl):

    ## connect to the server
    connect("weblogic", "weblogic", url=serverUrl)

    ## go to the custom MBean tree
    custom()

    ## go to the place where our app log level mbeans are stored. 
    cd('com.theserverlabs.jmx/com.theserverlabs.jmx:type=ApplicationLogManager,name=' + app + 'LogManager')

    ## horrible code necessary to invoke a Java method that changes the log level
    args = jarray.array([java.lang.String(logger), java.lang.String(level)], java.lang.Object)
    sig = jarray.array(['java.lang.String', 'java.lang.String'], java.lang.String)
    print 'changing log level to ' + level + ' for app ' + app + ' on server ' + serverUrl
    invoke('setLogLevel', args, sig)

    print '  ... done. New log levels are:'
    print get('LogLevels')

    disconnect()

# "main method" of the python script
argslength = len(sys.argv)

if argslength < 4:

    print '==>Insufficient arguments'
    print '==>Syntax: [APP NAME] [LOGGER] [LEVEL]'
    print '==> where: APP NAME=context root of the app e.g. xxx for /xxx'
    print '==>        LOGGER=log4j logger name'
    print '==>   e.g: xxx DEBUG'

else:

    app = sys.argv[1]
    logger = sys.argv[2]
    level = sys.argv[3]

    # change the log level for the server. if there was more than one server
    # in the cluster, you could add multiple calls here, each with a different 
    # server URL
    changeServerAppLogLevel(app, logger, level, 't3://localhost:7001')

Hopefully the majority of this code will be clear, even if you don’t understand python. The method/function changeServerAppLogLevel() declared at the start modifies the log level for a defined logger in an application running on a specific Weblogic server. To do this it:

  • Connects to the Weblogic server at the specified URL (note that this example uses the hardcoded username and password weblogic/weblogic)
  • Moves to the custom JMX MBean tree which is where Weblogic puts all non-Weblogic MBeans
  • Moves to the place where our MBean is stored in the tree
  • Executes the setLogLevel() method of the MBean with the required arguments to change the log level. This translates to an invocation of the ApplicationLogManager.setLogLevel() method.
  • Lists the new log levels

In the “main” method of the script (i.e. everything that is not in a method/function), there is some basic code to make sure the script is properly invoked and, if it is, a call to changeServerAppLogLevel() with the supplied arguments and the hard-coded URL of ‘t3://localhost:7001’. Obviously you should modify this value to reflect your Weblogic environment.

One interesting aspect of using a script to do this is that you can include a call to changeServerAppLogLevel() for each server in your cluster, if you have a multi-node setup. MBeans are published at server level so there is no concept of publishing a cluster-wide MBean (so far as I know anyway).

An interesting extension of this script would be to interrogate a Weblogic Admin server to find all the nodes in the cluster that were up and then execute the code to change the log level against each available node.


Hopefully this article has cleared up any confusion about how to perform logging with log4j in a JEE app deployed on Weblogic. I think the ability to dynamically modify the log level for a given log4j logger without having to redeploy the application is extremely useful, and hopefully it will be of use to people looking to do something similar with Weblogic or any other JEE server that supports JMX.

Getting started with JBI4Corba and Apache Servicemix

I’ve recently been investigating the use of the JBI4Corba JBI Binding Component with the Apache ServiceMix Enterprise Service Bus. JBI4Corba permits a JBI-compliant ESB to call Corba code running outside of the bus (the consumer scenario) and external code to call endpoints published on the bus via Corba (the provider scenario).

Why is this useful? Well, there is quite a lot of Corba code out there, but new applications are not generally written to use Corba technologies. JBI4Corba permits you to make services offered by legacy Corba code available to new applications and enables existing Corba code to call services published on the ESB. This means you don’t have to rewrite code that uses Corba in order to integrate it into an SOA architecture.

This post will show you how to get started with JBI4Corba and Apache ServiceMix, using an example taken from the JBI4Corba integration tests. The idea is to show from “first principles” how to arrive at a simple JBI4Corba solution, which is something that I have failed to find myself on the web.

In this example, we have a simple ‘legacy’ Corba interface with just one method that echoes an input parameter String back to the caller. This interface is implemented in a Java Corba servant. We wish to make this Corba interface available as a web service and call it using a standard HTTP client, which knows nothing about Corba. The diagram below (taken from the JBI4Corba website) indicates how the various components interact in this example:

Architectural Components of the Example

To get started, download this code from here and unzip it to a folder on your local machine. This example requires you to have Maven and Apache Servicemix installed on your machine. I have versions 2.0.9 and 3.3.1 of these two products, respectively.

The Corba Code

Once you have downloaded the code, run

mvn install

in the root directory in which you unzipped the code. The provider-simple-servant project contains the existing Corba code. In src/main/idl, you will find the Corba IDL file which expresses the Corba interfaces. In this case, it is a very simple interface:

module it {
	module imolinfo {
		module jbi4corba {
			module test {
				module testprovidersimple {
					interface Echo {
						string echo(in string msg);
					};
				};
			};
		};
	};
};
In this project, we use the Maven idlj plugin to generate the Java classes from this Corba IDL file. Below is the pom.xml configuration:


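A minimal configuration for this might look like the sketch below, assuming the Codehaus Mojo idlj-maven-plugin (check its documentation for the version matching your Maven installation):

```xml
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>idlj-maven-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <!-- compiles the IDL under src/main/idl into Java sources
                     in target/generated-sources/idl -->
                <goal>generate</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```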
When you run the maven generate-sources goal or a later goal (such as compile or install), idlj generates the Corba Java classes and puts them in the /target/generated-sources/idl directory, which is on the classpath.

The /src/main/java/it/imolinfo/jbi4corba/test/servant/testprovidersimple/EchoSimpleImpl.java class is a Corba servant that provides the behaviour for the Echo Corba interface. To run it, we must first start the Sun Corba ORB daemon; the exact command depends on your OS. On Unix:

  orbd -ORBInitialPort 1050 -ORBInitialHost localhost &

On Windows:

  start orbd -ORBInitialPort 1050 -ORBInitialHost localhost

Once this is running ok, we run the Echo server by executing the following Maven command in the /provider-simple-servant directory:

mvn exec:java -Dexec.mainClass="it.imolinfo.jbi4corba.test.servant.testprovidersimple.EchoSimpleImpl" -Dexec.args=""

In another command window, execute the following in the same /provider-simple-servant directory to run the EchoClient that tests that the server is working ok:

mvn exec:java -Dexec.mainClass="it.imolinfo.jbi4corba.test.servant.testprovidersimple.EchoClient" -Dexec.args="-ORBInitialPort 1050 -ORBInitialHost localhost"

When you run the client, you should see output like that below:

[INFO] [exec:java]
Obtained a handle on server object: IOR:000000000000003b49444c3a69742f696d6f6c696e666f2f6a626934636f7262612f746573742f7465737470726f766964657273696d706c652f4563686f3a312e300000000000010000000000000082000102000000000a3132372e302e312e3100881a00000031afabcb0000000020cd9636e200000001000000000000000100000008526f6f74504f410000000008000000010000000014000000000000020000000100000020000000000001000100000002050100010001002000010109000000010001010000000026000000020002
  result: test
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------

If not, your Echo server is not working correctly and you need to fix this before continuing with the rest of the post.

IMPORTANT: Leave the Echo server running – we will need it later on in the post.

Introducing JBI4Corba

Now that we know that the Corba code is working ok, we can create a JBI service unit that uses JBI4Corba to publish this Corba application on the ServiceMix ESB. The configuration to do this is in the provider-simple-jbi4corba-provider project. Unfortunately, there do not seem to be any Maven archetypes for JBI4Corba projects so we won’t look at how to create this project completely from scratch, but we will see the important steps.

You must generate a WSDL that describes your Corba interface so that it can be used with JBI4Corba. This can be done with the IDL to WSDL tool. Download this tool from here, and modify the shell/batch script that is in the /bin directory (if necessary) to point to your JDK installation. Once you have it installed, run the following command in the /provider-simple-jbi4corba-provider/src/main/resources directory, substituting [path-to] for the path where you installed IDL2WSDLTool:

 /[path-to]/bin/IDL2WSDLTool Echo.idl

This generates the Echo.wsdl file, which describes the Corba IDL in a WSDL format that JBI4Corba understands. Generating this file is, to be honest, a bit of a pain in the neck: you first have to write a properties file that describes the IDL file, such as the one in provider-simple-jbi4corba-provider/src/main/resources/


This file states that there is 1 interface in the IDL file, that it is called “Echo” and that it should be published at address “Echo” in the NameService. The WSDL file generated is Echo.wsdl.

Note that the jbi-services.xml file in /provider-simple-jbi4corba-provider/src/main/resources makes reference to the target namespace (targetNamespace=”http://it.imolinfo.jbi4corba.test.testprovidersimple.Echo”) declared at the top of the Echo.wsdl file, as well as other attributes contained within that file.



Once you have generated the WSDL file, run

mvn install

in the root of the provider-simple-jbi4corba-provider project to build and install the service unit.

Accessing the Corba code via HTTP Web services

Now we will create a JBI Service Unit that publishes the Endpoint created in the JBI4Corba Service Unit via HTTP web services. To do this, we will use the Apache Servicemix Maven archetypes, which make it much easier to create JBI service unit projects since they auto-generate the relevant structure and files and create the JBI XML descriptor automatically at build time. In the directory to which you downloaded the source code, run the following:

mvn archetype:create \
    -DarchetypeGroupId=org.apache.servicemix.tooling \
    -DarchetypeArtifactId=servicemix-http-consumer-service-unit \
    -DgroupId=it.imolinfo.jbi4corba.test-provider-simple \
    -DartifactId=provider-simple-http-consumer
You should now have a new project called “provider-simple-http-consumer”. Edit the provider-simple-http-consumer/src/main/resources/xbean.xml file and replace the contents with the following code:


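A sketch of such an xbean.xml using the servicemix-http consumer endpoint is shown below; the service and endpoint names are illustrative assumptions, while the target service, namespace and locationURI come from the project files described in this post:

```xml
<beans xmlns:http="http://servicemix.apache.org/http/1.0"
       xmlns:jbi4corba-test="http://it.imolinfo.jbi4corba.test.testprovidersimple.Echo">

    <!-- HTTP consumer endpoint: listens on locationURI and forwards
         incoming SOAP requests to the jbi4corba-test:Echo service -->
    <http:endpoint service="jbi4corba-test:EchoHttpService"
                   endpoint="EchoHttpEndpoint"
                   role="consumer"
                   targetService="jbi4corba-test:Echo"
                   locationURI="http://localhost:8192/Service/test-provider-simple/"
                   defaultMep="http://www.w3.org/2004/08/wsdl/in-out"
                   soap="true"/>

</beans>
```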
This publishes a new HTTP web service at http://localhost:8192/Service/test-provider-simple/ which calls out to the jbi4corba-test:Echo Endpoint established by the provider-simple-jbi4corba-provider JBI service unit.

In the provider-simple-http-consumer/src/main/resources directory, create a new file called EchoSimple.wsdl and paste in the following content:





Then run

mvn install

in the root of the project to create and install the JBI Service unit.

Creating the Service Assembly

A JBI Service assembly is a way of packaging up JBI Service units. We will create the Service Assembly maven project using another Servicemix archetype. Run the following in the directory to which you downloaded the source code:

mvn archetype:create \
    -DarchetypeGroupId=org.apache.servicemix.tooling \
    -DarchetypeArtifactId=servicemix-service-assembly \
    -DgroupId=it.imolinfo.jbi4corba.test-provider-simple \
    -DartifactId=provider-simple-service-assembly
This should create a new Maven sub-project called provider-simple-service-assembly which just contains a pom.xml file. Edit this file, remove the JUnit dependency and add the following instead:


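Assuming the artifact ids used earlier in this post (and that the jbi4corba provider project shares the same group id as the http consumer, which is an assumption), the two service-unit dependencies would look like:

```xml
<!-- the two JBI service units packaged into this service assembly -->
<dependency>
    <groupId>it.imolinfo.jbi4corba.test-provider-simple</groupId>
    <artifactId>provider-simple-jbi4corba-provider</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>it.imolinfo.jbi4corba.test-provider-simple</groupId>
    <artifactId>provider-simple-http-consumer</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
```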
Run mvn install in the service assembly project directory and a new file should be created in the provider-simple-service-assembly/target directory called provider-simple-service-assembly-1.0-SNAPSHOT.jar. This is the file we will deploy to ServiceMix.

Deploying to ServiceMix

Download and install Apache ServiceMix 3.3.1 if you have not already done so. I will refer to the installation directory as $SERVICEMIX_HOME. Download Jbi4Corba and copy it into the $SERVICEMIX_HOME/hotdeploy directory. Ensure you have a Java JDK installed, and copy the $JDK_HOME/lib/tools.jar file to $SERVICEMIX_HOME/lib/optional.

Start up ServiceMix by running the servicemix or servicemix.bat (depending on your OS) file in the $SERVICEMIX_HOME/bin folder.

Ensure that you still have the EchoServer running from earlier in this post. Now, copy the provider-simple-service-assembly/target/provider-simple-service-assembly-1.0-SNAPSHOT.jar file to the $SERVICEMIX_HOME/hotdeploy directory. You should see a whole bunch of messages on the ServiceMix console but hopefully no (serious) errors.

To test if the service assembly deployed ok, go to http://localhost:8192/Service/test-provider-simple/main.wsdl and you should see the WSDL descriptor for the HTTP web service. Save this WSDL file somewhere on your local machine.

Testing using SoapUI

SoapUI is a very effective tool for testing web services. If you have not already installed it, download and install the standard (free) edition and run it.

Select File > New soapUI project. Enter “test jbi4corba” as the project name and for the Initial WSDL/WADL field, browse for the WSDL file you saved from http://localhost:8192/Service/test-provider-simple/main.wsdl. Ensure the “create requests” option is ticked and click ok. Keep clicking OK till soapUI creates a TestSuite for you. Save the project.

Right-click on the “EchoCorbaPortBinding Test Suite” element. Choose “Launch Test Runner” and select “Launch” from the dialog. The test should complete successfully and you should see the message “message received: ?” in the debug output from the Corba Echo Server, proving that the message is making its way to the Corba code.


You can download the finished code from this article here.

With the aid of the ServiceMix Maven archetypes and a bit of knowledge of JBI4Corba, you can make services offered by legacy Corba applications available to modern Service Oriented Architecture components via an ESB, with all of the advantages that gives.

Check out the rest of the JBI4Corba integration tests, available here, for more complex examples.