The Server Labs Blog

Software Engineering

Creating Sonar Reports from Hudson


In order to guarantee the quality of software development projects, it is important to be able to verify that a continuous integration build meets a minimum set of quality control criteria. The open source project Hudson provides the popular continuous integration server we will use throughout this example. Similarly, Sonar is a leading open source tool providing a centralized platform for storing and managing this type of quality control indicator. By integrating Sonar with Hudson, we are able to extract and verify the quality control metrics stored by Sonar in an automated and recurring manner from Hudson. By verifying these metrics we can qualify a given build as valid from a quality perspective, and quickly flag builds where violations occur. At the same time, it is very useful to generate summaries of key quality metrics in an automated manner, informing interested parties with a daily email.

Installing Hudson

As a first step, you will need to download and install Hudson.

Installing the Groovy Postbuild Plugin

In order to be able to extend Hudson with custom Groovy-based scripts, we will use the Groovy Postbuild Plugin. To install this plugin, you will have to click on Manage Hudson followed by Manage Plugins, as shown below:

You will then have to select the Available tab at the top, and search for Groovy Postbuild Plugin under the section Other Post-Build Actions.

Sonar Reporting the Groovy Way

Once the Groovy Postbuild Plugin has been successfully installed and Hudson restarted, you can go ahead and download the SonarReports package and extract it to ${HUDSON_HOME}, the home directory of the Hudson server (e.g. the folder .hudson under the user’s home directory on Windows systems). This zip file contains the file SonarReports.groovy under scripts/groovy, which will be created under ${HUDSON_HOME} after expansion.

Hudson Job Configuration

To facilitate reuse of our Hudson configuration for Sonar, we will first create a Sonar Metrics job to be used as a template. We can then create a new job for each project we wish to create Sonar reports for by simply copying this job template.

In the Sonar Metrics job, we first create the parameters that will be used as thresholds and validated by our Groovy script. To this end, we select the checkbox This build is parameterized under the job’s configuration. We then configure the parameters as shown in the screenshots below:

  • projectName: project name that will appear in emails sent from Hudson.
  • sonarProjectId: internal project ID used by Sonar.
  • sonarUrl: URL for the Sonar server.
  • emailRecipients: email addresses for recipients of Sonar metrics summary.
  • rulesComplianceThreshold: minimum percentage of rule compliance for validating a build. A value of false means this metric will not be enforced.
  • blockerThreshold: maximum number of blocker violations for validating a build. A value of false means this metric will not be enforced.
  • criticalThreshold: maximum number of critical violations for validating a build. A value of false means this metric will not be enforced.
  • majorThreshold: maximum number of major violations for validating a build. A value of false means this metric will not be enforced.
  • codeCoverageThreshold: minimum percentage of code coverage for unit tests for validating a build. A value of false means this metric will not be enforced.
  • testSuccessThreshold: minimum percentage of successful unit tests for validating a build. A value of false means this metric will not be enforced.
  • violationsThreshold: maximum number of violations of all types for validating a build. A value of false means this metric will not be enforced.
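With these parameters in place, the validation performed by the script boils down to comparing each Sonar metric against its threshold, treating the literal value false as “not enforced”. Here is a hypothetical Java rendering of that logic (the actual SonarReports.groovy implementation may differ):

```java
// Hedged sketch of the threshold checks: a parameter value of "false"
// disables the corresponding metric check entirely.
public class ThresholdCheck {

    // Maximum-style thresholds (blocker/critical/major/violations counts):
    // the build passes if the measured value does not exceed the threshold.
    static boolean underMax(String threshold, double value) {
        return "false".equals(threshold) || value <= Double.parseDouble(threshold);
    }

    // Minimum-style thresholds (rule compliance, coverage, test success %):
    // the build passes if the measured value reaches the threshold.
    static boolean overMin(String threshold, double value) {
        return "false".equals(threshold) || value >= Double.parseDouble(threshold);
    }

    public static void main(String[] args) {
        // 3 blocker violations against a maximum of 10: check passes
        System.out.println(underMax("10", 3));      // true
        // 75% coverage against a minimum of 80%: check fails
        System.out.println(overMin("80", 75));      // false
        // disabled check always passes
        System.out.println(underMax("false", 999)); // true
    }
}
```

Maximum-style checks apply to the violation counts, and minimum-style checks to the percentage metrics.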

Finally, we enable the Groovy Postbuild plugin by selecting the corresponding checkbox under the Post-build Actions section of the job configuration page. In the text box, we include the following Groovy code to call into our script:

sonarReportsScript = "${System.getProperty('HUDSON_HOME')}/scripts/groovy/SonarReports.groovy"
shell = new GroovyShell(getBinding())
println "Executing script for Sonar report generation from ${sonarReportsScript}"
shell.evaluate(new File(sonarReportsScript))

Your Hudson configuration page should look like this:

Generating Sonar Reports

In order to automatically generate Sonar reports, we can configure our Hudson job to build periodically (e.g. daily) by selecting this option under Build Triggers. The job will then execute with the specified frequency, using the default quality thresholds we configured in the job’s parameters.

It is also possible to run the job manually to generate reports on demand at any time. In this case, Hudson will ask for the value of the threshold parameters that will be passed in to our Groovy script. These values will override the default values specified in the job’s configuration. Here is an example:

Verifying Quality Control Metrics

When the Hudson job runs, our Groovy script will verify that any thresholds defined in the job’s configuration are met by the project metrics extracted from Sonar. If the thresholds are met, the build will succeed and a summary of the quality control metrics will appear in the Hudson build. In addition, a summary email will be sent to the recipient list emailRecipients, providing interested parties with information regarding the key analyzed metrics.

On the other hand, if the thresholds are not met, the build will be marked as failed and the metric violations will be described in the Hudson build. Similarly, an email will be sent out informing recipients of the quality control violation.


This article demonstrates how Hudson can be extended with the use of dynamic programming languages like Groovy. In our example, we have created a Hudson job that verifies quality control metrics generated by Sonar and automatically sends quality reports by email. This type of functionality is useful in continuous integration environments, in order to extend the default features provided by Hudson or Sonar to meet custom needs.

Intellectual Property (IPR) Management and Monitoring Tools

It seems that every day projects have more and more dependencies on libraries (internal or external) and, of course, many of these depend on other libraries, resulting in a large dependency tree for any given project. How do you know if any of those libraries contain code which is licensed in a way that is incompatible with your company’s policies (e.g. no GPL)?

BT (the former British Telecom) apparently didn’t and ended up having to publish all the code used in one of the routers it distributes due to a GPL violation.

To give you an idea of the scale of this problem, a quick search of my local Maven repository reveals that it contains 1760 JAR files. Admittedly, not all of these belong to a single project; they are perhaps spread over 20 different projects. Even so, it is pretty infeasible to manage such a task manually.

Tools like Maven are a great help for managing the dependency trees in your project, but they don’t help much with checking the license that each dependency uses. The pom.xml file permits the use of a <license> element, but it is optional: many libraries either don’t use Maven or don’t specify the license, and you have to check compliance manually in any case.
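For reference, when a library does declare its license in its pom.xml, the (optional) element looks like this, here using the Apache License as an example:

```xml
<licenses>
  <license>
    <name>The Apache Software License, Version 2.0</name>
    <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
    <distribution>repo</distribution>
  </license>
</licenses>
```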

This is where IPR monitoring tools come in. Such tools allow the definition of licensing policies at an organizational level and provide mechanisms to monitor compliance with these policies in software projects, raising alerts on detected violations.

We recently had to take a look at such tools for one of our clients. After studying the market, we discovered that there are currently no open-source solutions covering this problem domain, but several commercial tools address the problem of continuous IPR monitoring.

For reference purposes, here is a list of the providers that we discovered:

  • Palamida Compliance Edition
  • Black Duck Protex
  • HiSoftware AccVerify
  • OpenLogic Library or Enterprise Edition

All of these commercial products offer common features:

  • Automated binary and source code analysis with multi-language support (Java, C/C++, C#,
    Visual Basic, Perl, Python, PHP). The analysis is performed against an external proprietary
    database that contains the code of most open-source products.
  • Workflows to control the IPR of software projects through the whole lifecycle,
    based on defined licensing policies.
  • License approval/rejection mechanisms, as well as bills of materials for software
    releases summarizing components, licenses, approval status and license/policy information.
  • Different levels of code fragment recognition to detect reuse of code.
  • User interfaces offering policy management, reporting and dashboard features.
  • Support for integration of code scan in Continuous Integration platforms via command line
    interface execution.

We think that these products are going to become increasingly important as the total number of libraries used in projects shows no sign of decreasing and there will always be a need to protect intellectual property.

Eating our own Dog Food! – The Server Labs moves its Lab to the Cloud!


After all these years dealing with servers, switches, routers and virtualisation technologies, we think it’s time to move our lab into the next phase, the Cloud, specifically the Amazon EC2 Cloud.

We are actively working in the Cloud now on different projects, as you’ve seen in previous blog posts. We believe this step is not only a natural one but also takes us in the right direction towards more effective management of resources and higher business agility. This fits the needs of a company like ours, and we believe it will also fit many others of different sizes and requirements. Cloud computing is not only a niche for special projects with very specific needs: it can be used by normal companies to have a more cost-effective IT infrastructure, at least in certain areas.

In our lab we had a mixture of server configurations, comprising Sun and Dell servers running all kinds of OSs, using VMware and Sun virtualisation technology. The purpose of our lab is to provide an infrastructure for our staff, partners and customers to perform specific tests, prototypes, PoCs, etc. It is also our R&D resource for creating new architecture solutions.

Moving our lab to the cloud will provide an infrastructure that is more flexible, manageable, powerful, simple and definitely more elastic to set up, use and maintain, without removing any of the features we currently have. It will also allow us to concentrate more on this new paradigm, creating advanced cloud architectures and increasing our overall know-how, which can be fed back to customers and the community.

In order to commence this small project, the first thing to do was a small feasibility study to identify the different technologies to use inside the cloud, primarily to maintain confidentiality and secure access, but also to properly manage and monitor that infrastructure. Additionally, one of the main drivers of this activity was to reduce our monthly hosting cost, so we needed to calculate, based on current usage, the savings of moving to the cloud.

Cost Study

Looking at the cost of moving to the cloud, we performed an inventory of the required CPU power, server instances, storage (for both Amazon S3 and EBS) and the estimated data I/O. Additionally, we estimated the volume of data transferred between nodes and between Amazon and the external world.

We initially thought about automatically shutting down and bringing up the servers that are only needed during working hours, in order to save more money. In the end, we will be using Amazon reserved instances, which give an even lower per-hour price, similar to the effective price we would have obtained by running on-demand servers only during working hours.

Based on this inventory and these estimations, and with the help of the Amazon cost calculator, we arrived at a final monthly cost that was approximately one third of our hosting bill!

This cost considers purely the physical infrastructure. On top of this we need to add the savings on hardware renewal, pure system administration and system installation. Even using virtualisation technologies, we have sometimes had to rearrange things because our physical infrastructure was limited. All these extra costs mean savings in the cloud.

Feasibility Study

Moving to the cloud gives most IT managers the feeling that they lose control, most importantly control of their data. While the use of hybrid clouds can permit control of the data, in our case we wanted to move everything to the cloud. We are certainly no different, and we are quite paranoid about our data and how it would be stored in Amazon EC2. Also, we still require secure network communication between our nodes in the Amazon network and the ability to give secure external access to our staff and customers.

There are a set of open-source technologies that have helped us to materialize these requirements into a solution that we feel comfortable with:

  • Filesystem encryption for securing data storage in Amazon EBS.
  • A private network and IP range for all nodes.
  • Cloud-wide encrypted communication between nodes within a private IP network range, via an OpenVPN solution.
  • An IPSec VPN solution for permanent external access to the Cloud Lab, for instance connecting a private cloud/network to the public EC2 Cloud.
  • Use of RightScale to manage and automate the infrastructure.
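As an illustration of the encrypted-communication piece, an OpenVPN server configuration for such a setup could look roughly like this (a hypothetical sketch with made-up file names, not our actual configuration):

```
port 1194
proto udp
dev tun
# private IP range handed out to the cloud nodes
server 10.8.0.0 255.255.255.0
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
# encrypt all node-to-node traffic
cipher AES-256-CBC
keepalive 10 120
persist-key
persist-tun
```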
Overview of TSL Secure Cloud deployment


Implementation and Migration

The implementation of our Cloud Lab solution has gone very smoothly and it is working perfectly.
One of the beneficial side effects of migrating different systems into the cloud is that it forces you to be much more organised, as the infrastructure is very focused on reuse and the automatic recreation of the different servers.

We have standardized all our lab images, taking prebuilt images available in Amazon and customising them to include the security hardening, standard services and conventions we have defined. In a matter of seconds we can deploy new images and include them into our secure VPN-based Cloud Lab network, ready to be used.

Our new Cloud Lab is giving us a very stable, cost-effective, elastic and secure infrastructure, which can be rebuilt in minutes using EBS snapshots.

Developing applications with SCA and a JBI-Based supporting infrastructure

We have been working with SOA technologies and solutions in the commercial and open-source arenas for some years now, and I would like to start a new series with this post covering the developments of two major standardisation efforts in this area: SCA (Service Component Architecture) and JBI (Java Business Integration).

While for some time SCA and JBI were presented and considered as competitors, it is now quite an accepted idea in the industry that these standards cover different standardisation areas. They can be used separately, but also together to get best-of-breed solutions.

SCA’s main benefit is that it provides a technology-agnostic, generic programming model that decouples the implementation of components from their communication, allowing a high level of reuse. Applications developed following the SCA model should be deployable without changes on different SCA vendor platforms, following different integration and deployment patterns depending on project needs. This helps to clearly separate application concerns, allowing developers to focus on the business logic of services while integration and deployment issues are handled by architects and integrators.

On the other hand, JBI standardises a Java-based integration infrastructure where components from different vendors can interact in a standard fashion. This standard is currently used to implement standardised ESBs, and it can provide the integration platform where SCA applications run.

I was especially interested in solutions implementing the mix, offering SCA to provide standardisation at the application composition level while using JBI to provide the standard integration and runtime infrastructure, in the form of an Enterprise Service Bus (ESB). Examples of JBI implementations of ESBs are Apache ServiceMix, OpenESB and OW2 PEtALS.

In this area we can find several efforts, mainly the Eclipse Swordfish project and OW2 PEtALS.
Eclipse Swordfish looks like a very promising project, mixing JBI and OSGi to implement a fully distributed Enterprise Service Bus infrastructure where SCA-based applications can run. However, at this moment its SCA support is quite limited. OW2 PEtALS also offers a distributed ESB solution, based on JBI only, and has an SCA service engine based on OW2 FraSCAti to run SCA composite applications. To learn more about how this is implemented, have a look at this presentation from the PEtALS team.

So I decided to try to use OW2 PEtALS to run a simple SCA calculator, similar to the Apache Tuscany calculator sample. My objective was to verify the value of developing a SOA solution with SCA, using the integration features of JBI as a mediation ESB, and to explore the possibilities of distributed ESB features and extensibility via new JBI components, such as JBI4Corba.

In order to follow this post, it will be helpful if you are familiar with JBI and SCA concepts.

To support the development I used the latest Eclipse 3.5 Galileo, fully loaded with the SOA Tools, which include the SCA Tools. These tools provide a nice graphical environment to develop SCA composites, as we will see.
Additionally, PEtALS offers a series of Eclipse plugins that make a developer’s life easier when creating JBI Service Units (SUs) and JBI Service Assemblies (SAs). I was pleasantly surprised to see that they give the user the possibility of setting up either simple projects or Maven projects, and it is nice to see the use of Maven archetypes all over the place.
As you can imagine, I decided to go the Maven way, making my life much easier. They offer a quite good developer manual providing all the information needed to set up the development environment.

So, the complete list of the required gear is:


We want to implement a SCA Calculator as Java components and deploy it in the PEtALS ESB (as depicted below), using a SOAP Binding Component to expose the application as a web service to the external world. We will use SOAPUI to test the application.

SCA Calculator exposed as SOAP WS in PEtALS.


In order to deploy this we need to configure and develop the following artifacts:

  1. Install the necessary PEtALS components into the ESB: the SOAP BC and the SCA SE.
  2. Create a JBI Service Unit (SU Provide) containing the SCA composite to be deployed against the SCA SE.
  3. Create a JBI Service Unit (SU Consume) to expose the SCA composite via a SOAP WS.
  4. Create a JBI Service Assembly (SA) containing the two SUs, ready to be deployed into the ESB.

The complete sources of the article can be found here (Maven Projects).

Install necessary PEtALS components into the ESB

Starting with the quickstart PEtALS distribution makes everything really simple. You just need to start the bus with:

/bin/ -C (this starts it in console mode, very handy)

If you are on Windows, note that there is a typo in the documentation: the bus needs to be started with a lowercase -c.

Installing the SOAP BC and the SCA SE is as simple as copying the two zip files for the components into the “install” directory of PETALS_HOME. The components will be automatically deployed into the bus.
You can also deploy any BC, SE or SA using the terminal console or via the web console.

Make sure you read the limitations (e.g. regarding redeployment of SUs) and apply the changes currently required for the SCA Service Engine (i.e. isolatedClassLoader), as described in the documentation of the component.

The web console will be available at http://localhost:7878/.

Let’s start with the difficult bit: creating the SCA SU. The others, the SOAP SU and the SA, will be very simple.

Create a JBI Service Unit (SU Provide) containing the SCA composite

Generate Maven Project
We can generate an empty Maven Service Unit project in several ways: using Maven archetypes from the command line, or via the PEtALS Eclipse plugins. I found the Eclipse plugins more complete (see the picture below), as they are able to create empty projects already targeted to work as consume/provide SUs for existing PEtALS components.

PEtALS Eclipse plugins


For this SU, we need to define a new project that deploys into the SCA SE, so we select “Use PEtALS technical service -> Use SCA“.
You will have to fill in some fields, such as the name of the composite (e.g. “Calculator”) and the target namespace (e.g. “”).

This will create a Maven project (you need to select this option); let’s call the project “su-SCA-Calculator-provide”. You will need to fill in the SU-specific pom.xml entries (i.e. groupId, modelVersion and version). Also, remove the parent definitions added by default.

Make sure you have a “description” in the pom.xml, as it will be used later by the Service Assembly to populate some fields.

Create SCA Composite
The wizard has already created the SCA composite, called “Calculator.composite”, under the “/src/main/jbi” directory. Composites must be defined in this directory so that the files are kept at the root level of the SU when building the project, and not included in the jar file created with the SCA artifact code.
To associate an SCA composite diagram with the XML configuration file, right click on the SCA composite file and select “SCA->Initialize SCA Composite Diagram file”. This will create a “.composite_diagram” file that will always be in sync with the XML configuration, and vice versa.

Once the composite is created, we can modify it via the XML or via the designer. The SCA composite for the Calculator looks something like this:

Calculator SCA Composite


As you can see, our composite is very simple: it has a CalculatorService whose interface is exposed as a service outside the composite, and four references to other composite components providing the add, subtract, divide and multiply implementations.

The SCA composite configuration file would look like this:
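The full listing is in the sources package; a hedged sketch of what such a composite looks like, following the SCA 1.0 and FraSCAti conventions (component names and namespace URIs here are illustrative), is:

```xml
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           xmlns:frascati="http://frascati.ow2.org/xmlns/sca/1.1"
           name="Calculator">

  <!-- CalculatorService promoted outside the composite, bound to the JBI bus -->
  <service name="CalculatorService" promote="CalculatorServiceComponent/CalculatorService">
    <frascati:binding.jbi/>
  </service>

  <component name="CalculatorServiceComponent">
    <implementation.java class="calculator.CalculatorServiceImpl"/>
    <!-- wired via "target" to the components providing the four operations -->
    <reference name="addService" target="AddServiceComponent"/>
    <reference name="subtractService" target="SubtractServiceComponent"/>
    <reference name="multiplyService" target="MultiplyServiceComponent"/>
    <reference name="divideService" target="DivideServiceComponent"/>
  </component>

  <component name="AddServiceComponent">
    <implementation.java class="calculator.AddServiceImpl"/>
  </component>
  <!-- SubtractServiceComponent, MultiplyServiceComponent and
       DivideServiceComponent are declared in the same way -->
</composite>
```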


Note that at the beginning of the file, the CalculatorService interface is promoted as a composite service. The SCA binding defined for this is the FraSCAti JBI binding, which registers the service in the PEtALS bus as an internal JBI endpoint. The CalculatorService references are wired to the rest of the components via “target”.

Create required WSDL files for promoted SCA services

The JBI message exchange model is based on WSDL and, as described in the SCA SE documentation, the SU package must contain a WSDL describing the promoted services of each composite. In our case, we need to provide a WSDL for the CalculatorService.
Currently, the WSDL must be provided in document/literal wrapped style and it is not automatically generated. However, tools like Java2WSDL from Apache Axis allow us to create it in a simple way.

The Apache Axis command to generate the WSDL is:

java org.apache.axis.wsdl.Java2WSDL -y WRAPPED -u LITERAL -l localhost calculator.CalculatorService

This generates a WSDL that should be copied into the “/src/main/jbi” directory under the name given in the “frascati:binding.jbi” section of the composite file, in our case “calculator.wsdl”.

We need to make a few changes to the generated WSDL to make it work:

  1. Change the request message definitions from, for instance, “addRequest” to “add”, so that they comply with the WSDL “wrapped” style, in which the input wrapper has the same name as the operation.
  2. Apply the same change in the port type and binding sections of the operations.
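For instance, the wrapper element for the add operation changes along these lines (a hedged illustration; the parameter names and the surrounding schema are hypothetical):

```xml
<!-- before: wrapper named after the generated message -->
<element name="addRequest">
  <complexType>
    <sequence>
      <element name="a" type="xsd:double"/>
      <element name="b" type="xsd:double"/>
    </sequence>
  </complexType>
</element>

<!-- after: wrapper renamed to match the operation name "add" -->
<element name="add">
  <complexType>
    <sequence>
      <element name="a" type="xsd:double"/>
      <element name="b" type="xsd:double"/>
    </sequence>
  </complexType>
</element>
```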

I don’t include the final WSDL file here as it is a bit long. You can find it in the sources package.

Configure jbi.xml file

Everything is ready; we just need to define which services are provided or consumed by this SU. In this case, the SU provides services, so the jbi.xml file would look like this:
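The original listing is in the sources package; a hedged sketch of its shape (namespace prefixes and names are illustrative, and the PEtALS extension elements are summarised as a comment) is:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jbi xmlns="http://java.sun.com/xml/ns/jbi" version="1.0"
     xmlns:calc="http://calculator">
  <services binding-component="false">
    <provides interface-name="calc:CalculatorService"
              service-name="calc:CalculatorServiceService"
              endpoint-name="CalculatorServicePort">
      <!-- PEtALS extension parameters go here, pointing the component
           at calculator.wsdl and at the Calculator composite file -->
    </provides>
  </services>
</jbi>
```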





For each provided service we need to define the interface-name, service-name and endpoint-name. It is important that this information matches the binding definitions in calculator.wsdl: the interface-name must match the WSDL portType name, and the endpoint-name must match the WSDL port.
We also need to tell PEtALS which WSDL file describes the provided service and, specifically for the SCA component, which is the composite file.

Build the project

The last step is to build the project with a simple:

mvn install

Make sure you have included the PEtALS Maven repository in your Maven configuration so that it finds the required artifacts. This is well described in their development guide.

Create a JBI Service Unit (SU Consume) to expose the SCA composite via SOAP WS

This SU is much simpler. The component just needs the information about the internal JBI service that it must consume, and it will do the rest for us.

Using the PEtALS Eclipse plugins again, we create a new Maven project (“New->Other->PEtALS->Expose Service from PEtALS->use SOAP”) called “su-SOAP-calculatorService-consume”. The wizard allows you to define the service you want to expose, so if you select the SCA SU Eclipse project, the rest of the fields will be populated automatically.

SOAP SU Consume Wizard


As with the other SU, you will need to define the project-specific fields of the pom.xml, and don’t forget to define a “description” field.

This SU only contains the jbi.xml, defining which service must be exposed (consumed). We need to make sure that the consumes section contains the references to the previously defined SCA SU. This would be the required jbi.xml:
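Its shape is roughly as follows (a hedged sketch; the consumes attributes must mirror the provides section of the SCA SU, and the extension elements are summarised as a comment):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jbi xmlns="http://java.sun.com/xml/ns/jbi" version="1.0"
     xmlns:calc="http://calculator">
  <services binding-component="true">
    <consumes interface-name="calc:CalculatorService"
              service-name="calc:CalculatorServiceService"
              endpoint-name="CalculatorServicePort">
      <!-- SOAP BC extension parameters (exposed service name,
           SOAP version, etc.) go here -->
    </consumes>
  </services>
</jbi>
```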


All fields are defined at the moment of project creation. The only important thing is to make sure the “consumes” section provides the proper interface-name, service-name and endpoint-name. These must match those defined in the SCA SU so that the two SUs can talk to each other. If you selected the SCA SU Eclipse project in the SOAP SU creation wizard, all these fields should already be correctly filled.

Once this is done you can build the project with a maven install.

Create a JBI Service Assembly (SA) for deployment

Deployment in the JBI ESB is performed via Service Assemblies, which can contain many SUs, each one bound to a different component (BC, SE, etc.). In our case we have the SCA SU bound to the SCA SE component and the SOAP SU bound to the SOAP BC component. This is defined in the jbi.xml of the Service Assembly.

As before, we can use the PEtALS Eclipse plugins to create an empty SA Maven project (“New->Other->PEtALS->SA Maven Project”) called “sa-SCA-Calculator”. The wizard allows us to add the SUs we want into the SA, so no further configuration is needed.

Once the project is created, as with the previous artifacts, we need to define the project-specific parameters in the pom.xml. Make sure you define a description.

The SA project defines the SUs to be included via standard Maven dependencies. That’s the only configuration step to perform (done automatically by the wizard), and the jbi.xml will be created automatically. The pom.xml of the SA would then include:

	<description>A description of sa-SCA-Calculator</description>
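The essential part of that pom.xml is the dependencies block pulling in the two SUs (a hedged sketch; the groupId and version values are illustrative):

```xml
<dependencies>
  <!-- the two Service Units to be packaged into this Service Assembly -->
  <dependency>
    <groupId>com.example.sca</groupId>
    <artifactId>su-SCA-Calculator-provide</artifactId>
    <version>1.0-SNAPSHOT</version>
  </dependency>
  <dependency>
    <groupId>com.example.sca</groupId>
    <artifactId>su-SOAP-calculatorService-consume</artifactId>
    <version>1.0-SNAPSHOT</version>
  </dependency>
</dependencies>
```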


Run a maven install to build the SA.

Deploy SA into PEtALS

The generated zip file just needs to be copied to the PETALS_HOME/install directory, and the SA will be automatically installed and started.

The PEtALS console should show information regarding the compilation and creation of the required classes for the SCA and SOAP SUs:

SCA Calculator deployed in PEtALS


Test the Service

The only thing left is to test that everything works. For that I used SOAPUI, loading the WSDL from the SOAP WS services page (provided by the PEtALS component at http://localhost:8084/petals/services/CalculatorService?wsdl).

Testing SCA Calculator with SOAPUI


Future Posts

In future posts I will explore transparent deployment of SCA applications across a distributed ESB (such as PEtALS) and the use of external references to services using CORBA (e.g. replacing the add service with an external CORBA-based service).
The latter will also explore the use of third-party JBI components (such as JBI4Corba) in PEtALS, as there is no native PEtALS JBI BC component for CORBA.

The Server Labs open sources their Maven utPLSQL plugin

Following on from my post the other day, I’m very happy to announce that we have released the source code for our Maven utPLSQL plugin under an Apache 2.0 license.

The code (and downloads for the latest version of the plugin) is published on the Google Code website and is available from the following URL:

We hope that the publication of this source code will enable more people to test their PL/SQL code regularly using continuous integration, thereby improving the quality of their code.

If anyone is interested in contributing an enhancement or bug fix, they are more than welcome.