
ALSB/OSB customization using WLST

One of the primary tasks in release management is environment promotion. From development to test, or from test to production, environment promotion is a step that should be automated as much as possible.

We can use the service bus MBeans in WLST scripts to automate promotion of AquaLogic/Oracle Service Bus configurations from development environments through testing, staging, and finally to production environments.

Each environment has particularities that may require changes to the configuration of the software. These settings are usually centralized in property files, database tables, environment variables or some other place, precisely to facilitate environment promotion.

In AquaLogic/Oracle Service Bus there is the concept of environment values:

Environment values are certain predefined fields in the configuration data whose values are very likely to change when you move your configuration from one domain to another (for example, from test to production). Environment values represent entities such as URLs, URIs, file and directory names, server names, e-mails, and such. Also, environment values can be found in alert destinations, proxy services, business services, SMTP Server and JNDI Provider resources, and UDDI Registry entries.

For these environment values, we have several standard operations:

  • Finding and Replacing Environment Values
  • Creating Customization Files
  • Executing Customization Files

However, these operations are limited to the ‘predefined fields whose values are very likely to change’… so what happens if we need to modify one of the fields considered ‘not very likely’ to change? Whether SAP client connection parameters really are ‘not very likely’ to change in an environment promotion from test to production is a different story…

In order to automate these necessary changes, one option is to modify the exported configuration directly before importing it into the destination environment. In our case, however, we want to keep the philosophy of customizing after the import, leaving the exported package untouched. We will therefore use a WLST script instead of a customization file, as the latter doesn’t satisfy our needs.

The first thing we have to do in order to use WLST is to add several service bus jar files to the WLST classpath. For example, on a Windows platform we add the following at the beginning of the wlst.cmd file (I’m sure *nix people will know how to proceed in their case).

For Aqualogic Service Bus 3.0:

SET ALSB_HOME=c:\bea\alsb_3.0
SET CLASSPATH=%CLASSPATH%;%ALSB_HOME%\lib\sb-kernel-api.jar
SET CLASSPATH=%CLASSPATH%;%ALSB_HOME%\lib\sb-kernel-common.jar
SET CLASSPATH=%CLASSPATH%;%ALSB_HOME%\lib\sb-kernel-resources.jar
SET CLASSPATH=%CLASSPATH%;%ALSB_HOME%\lib\sb-kernel-impl.jar
SET CLASSPATH=%CLASSPATH%;%ALSB_HOME%\..\modules\com.bea.common.configfwk_1.1.0.0.jar
SET CLASSPATH=%CLASSPATH%;%ALSB_HOME%\..\modules\com.bea.alsb.statistics_1.0.0.0.jar

For Oracle Service Bus 10gR3:

SET ALSB_HOME=c:\bea\osb_10.3
SET CLASSPATH=%CLASSPATH%;%ALSB_HOME%\lib\sb-kernel-api.jar
SET CLASSPATH=%CLASSPATH%;%ALSB_HOME%\lib\sb-kernel-common.jar
SET CLASSPATH=%CLASSPATH%;%ALSB_HOME%\lib\sb-kernel-resources.jar
SET CLASSPATH=%CLASSPATH%;%ALSB_HOME%\lib\sb-kernel-impl.jar
SET CLASSPATH=%CLASSPATH%;%ALSB_HOME%\..\modules\com.bea.common.configfwk_1.2.1.0.jar
SET CLASSPATH=%CLASSPATH%;%ALSB_HOME%\..\modules\com.bea.alsb.statistics_1.0.1.0.jar

In our example, we will try to change the HTTP timeout in the normalLoanProcessor business service present in the ALSB/OSB examples server.

normalLoanProcessor configuration

For that, we will first connect to the bus from WLST and open a session using the SessionManagementMBean:

from com.bea.wli.sb.management.configuration import SessionManagementMBean

# connect to the examples server and change to the domain runtime tree,
# where the service bus MBeans are registered
connect("weblogic", "weblogic", "t3://localhost:7021")
domainRuntime()

# create a named configuration session, equivalent to 'Create' in sbconsole
sessionMBean = findService(SessionManagementMBean.NAME, SessionManagementMBean.TYPE)
sessionName = "mysession"
sessionMBean.createSession(sessionName)

mysession shown in sbconsole

Nothing new so far. The next thing we need is a reference to the component we want to modify. We chose to use a BusinessServiceQuery like:

from com.bea.wli.sb.management.query import BusinessServiceQuery
from com.bea.wli.sb.management.configuration import ALSBConfigurationMBean
bsQuery = BusinessServiceQuery()
bsQuery.setLocalName("normalLoanProcessor") 
bsQuery.setPath("MortgageBroker/BusinessServices")
alsbSession = findService(ALSBConfigurationMBean.NAME + "." + sessionName, ALSBConfigurationMBean.TYPE)
refs = alsbSession.getRefs(bsQuery)
bsRef = refs.iterator().next()

After this we have a reference to the business service we want to modify. Now is when the fun begins.

There is an undocumented service bus ServiceConfigurationMBean (not to be confused with the old com.bea.p13n.management.ServiceConfigurationMBean) whose description is ‘MBean for configuring Services’.

ServiceConfiguration.mysession as shown in jconsole

Among the different methods, we find one with an interesting name: getServiceDefinition

getServiceDefinition as shown in jconsole

It looks like we can use the getServiceDefinition method with our previous reference to the business service to obtain exactly what its name states.

from com.bea.wli.sb.management.configuration import ServiceConfigurationMBean
servConfMBean = findService(ServiceConfigurationMBean.NAME + "." + sessionName, ServiceConfigurationMBean.TYPE)
serviceDefinition = servConfMBean.getServiceDefinition(bsRef)

This is the result of printing the serviceDefinition variable (abridged; a few entries are summarised in comments):

<ser:serviceDefinition xmlns:ser="http://www.bea.com/wli/sb/services"
                       xmlns:tran="http://www.bea.com/wli/sb/transports"
                       xmlns:http="http://www.bea.com/wli/sb/transports/http"
                       xmlns:env="http://www.bea.com/wli/config/env">
  <ser:coreEntry>
    <ser:binding>
      <!-- SOAP binding: NormalLoanApprovalServiceSoapBinding,
           namespace http://example.org -->
    </ser:binding>
    <!-- monitoring (aggregation interval 5), alerting ('normal') and
         WS-Policy ('wsdl-policy-attachments') entries omitted -->
  </ser:coreEntry>
  <ser:endpointConfig>
    <tran:provider-id>http</tran:provider-id>
    <tran:inbound>false</tran:inbound>
    <tran:URI>
      <env:value>http://localhost:7021/njws_basic_ejb/NormalSimpleBean</env:value>
    </tran:URI>
    <tran:outbound-properties>
      <!-- load balancing 'none', retry count 0, retry interval 30,
           retry application errors 'true' -->
    </tran:outbound-properties>
    <tran:provider-specific>
      <http:outbound-properties>
        <http:request-method>POST</http:request-method>
        <http:timeout>0</http:timeout>
      </http:outbound-properties>
    </tran:provider-specific>
  </ser:endpointConfig>
</ser:serviceDefinition>
Surprised? It’s exactly the same definition written in .BusinessService XML files. In fact, the service definition implements XMLObject.

Now it’s time to update the business service definition with our new timeout value (let’s say 5000 milliseconds) using XPath and XMLBeans. We must also take care to declare the namespaces in our XPath expressions the same way they are declared in .BusinessService XML files.

nsEnv = "declare namespace env='http://www.bea.com/wli/config/env' "
nsSer = "declare namespace ser='http://www.bea.com/wli/sb/services' "
nsTran = "declare namespace tran='http://www.bea.com/wli/sb/transports' "
nsHttp = "declare namespace http='http://www.bea.com/wli/sb/transports/http' "
nsIWay = "declare namespace iway='http://www.iwaysoftware.com/alsb/transports' "
confPath = "ser:endpointConfig/tran:provider-specific/http:outbound-properties/http:timeout"
confValue = "5000"
confElem = serviceDefinition.selectPath(nsSer + nsTran + nsHttp + confPath)[0]
confElem.setStringValue(confValue)

We are almost there. First we update the service.

servConfMBean.updateService(bsRef, serviceDefinition)

Modified mysession shown in sbconsole

And finally, we activate the session (see the NOTE below), just as we would do in the bus console.

sessionMBean.activateSession(sessionName, "Comments")

mysession changes shown in sbconsole

Task details of mysession

Updated normalLoanProcessor configuration

With this approach, it would be possible to build a framework that allows us to customize ALL fields as needed.
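
As a hint of what such a framework could look like, here is a minimal sketch that wraps the steps of this post into a single XPath-driven helper. The customizeBusinessService function and its parameters are our own illustration (not part of any product API); it reuses the session and the namespace declaration variables defined earlier in the script:

from com.bea.wli.sb.management.configuration import ALSBConfigurationMBean, ServiceConfigurationMBean
from com.bea.wli.sb.management.query import BusinessServiceQuery

def customizeBusinessService(sessionName, path, localName, xpath, value):
    # look up the session-scoped MBeans, as we did step by step above
    alsbMBean = findService(ALSBConfigurationMBean.NAME + "." + sessionName,
                            ALSBConfigurationMBean.TYPE)
    servConfMBean = findService(ServiceConfigurationMBean.NAME + "." + sessionName,
                                ServiceConfigurationMBean.TYPE)
    # locate the business service by project path and local name
    bsQuery = BusinessServiceQuery()
    bsQuery.setLocalName(localName)
    bsQuery.setPath(path)
    bsRef = alsbMBean.getRefs(bsQuery).iterator().next()
    # fetch the definition, patch every node matched by the XPath, write it back
    serviceDefinition = servConfMBean.getServiceDefinition(bsRef)
    for elem in serviceDefinition.selectPath(nsSer + nsTran + nsHttp + xpath):
        elem.setStringValue(value)
    servConfMBean.updateService(bsRef, serviceDefinition)

# the timeout change from this post, expressed as a single call
customizeBusinessService(sessionName, "MortgageBroker/BusinessServices",
    "normalLoanProcessor",
    "ser:endpointConfig/tran:provider-specific/http:outbound-properties/http:timeout",
    "5000")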

NOTE:
If you get the exception below when activating the changes, please update your WebLogic Server configuration as described in ‘Deploy to Oracle Service Bus does not work’.

Traceback (innermost last):
  File "", line 1, in ?
com.bea.wli.config.deployment.server.ServerLockException: Failed to obtain WLS Edit lock; it is currently held by user weblogic. This indicates that you have either started a WLS change and forgotten to activate it, or another user is performing WLS changes which have yet to be activated. The WLS Edit lock can be released by logging into WLS console and either releasing the lock or activating the pending WLS changes.
        at com.bea.wli.config.deployment.server.ServerDeploymentInitiator.__serverCommit(Unknown Source)
        at com.bea.wli.config.deployment.server.ServerDeploymentInitiator.access$200(Unknown Source)
        at com.bea.wli.config.deployment.server.ServerDeploymentInitiator$1.run(Unknown Source)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
        at weblogic.security.service.SecurityManager.runAs(Unknown Source)
        at com.bea.wli.config.deployment.server.ServerDeploymentInitiator.serverCommit(Unknown Source)
        at com.bea.wli.config.deployment.server.ServerDeploymentInitiator.execute(Unknown Source)
        at com.bea.wli.config.session.SessionManager.commitSessionUnlocked(SessionManager.java:420)
        at com.bea.wli.config.session.SessionManager.commitSession(SessionManager.java:339)
        at com.bea.wli.config.session.SessionManager.commitSession(SessionManager.java:297)
        at com.bea.wli.config.session.SessionManager.commitSession(SessionManager.java:306)
        at com.bea.wli.sb.management.configuration.SessionManagementMBeanImpl.activateSession(SessionManagementMBeanImpl.java:47)
[...]

Creating Sonar Reports from Hudson

Introduction

In order to guarantee the quality of software development projects, it is important to be able to verify that a continuous integration build meets a minimum set of quality control criteria. The open source project Hudson provides the popular continuous integration server we will use throughout our example. Similarly, Sonar is a leading open source tool providing a centralized platform for storing and managing this type of quality control indicator. By integrating Sonar with Hudson, we are able to extract and verify the quality control metrics stored by Sonar in an automated and recurrent manner from Hudson. By verifying these metrics we can qualify a given build as valid from a quality perspective, and quickly flag builds where violations occur. At the same time, it is very useful to generate summaries of key quality metrics in an automated manner, informing interested parties with a daily email.

Installing Hudson

As a first step, you will need to download and install Hudson from http://hudson-ci.org/.

Installing the Groovy Postbuild Plugin

In order to be able to extend Hudson with custom Groovy-based scripts, we will use the Groovy Postbuild Plugin. To install this plugin, you will have to click on Manage Hudson followed by Manage Plugins, as shown below:

You will then have to select the Available tab at the top, and search for Groovy Postbuild Plugin under the section Other Post-Build Actions.

Sonar Reporting the Groovy Way

Once the Groovy Postbuild Plugin has been successfully installed and Hudson restarted, you can go ahead and download the SonarReports package and extract it to ${HUDSON_HOME}, the home directory of the Hudson server (e.g. the folder .hudson under the user’s home directory on Windows systems). This zip file contains the file SonarReports.groovy under scripts/groovy, which will be created under ${HUDSON_HOME} after expansion.

Hudson Job Configuration

To facilitate reuse of our Hudson configuration for Sonar, we will first create a Sonar Metrics job to be used as a template. We can then create a new job for each project we wish to create Sonar reports for by simply copying this job template.

In the Sonar Metrics job, we first create the necessary parameters that will be used as thresholds and validated by our Groovy script. To this end, we select the checkbox This build is parameterized under the job’s configuration. We then configure the parameters shown below, for which we have provided the corresponding screenshots (a sketch of the verification logic they drive follows the list):

  • projectName: project name that will appear in emails sent from Hudson.
  • sonarProjectId: internal project ID used by Sonar.
  • sonarUrl: URL for the Sonar server.
  • emailRecipients: email addresses for recipients of Sonar metrics summary.
  • rulesComplianceThreshold: minimum percentage of rule compliance for validating a build. A value of false means this metric will not be enforced.
  • blockerThreshold: maximum number of blocker violations for validating a build. A value of false means this metric will not be enforced.
  • criticalThreshold: maximum number of critical violations for validating a build. A value of false means this metric will not be enforced.
  • majorThreshold: maximum number of major violations for validating a build. A value of false means this metric will not be enforced.
  • codeCoverageThreshold: minimum percentage of code coverage for unit tests for validating a build. A value of false means this metric will not be enforced.
  • testSuccessThreshold: minimum percentage of successful unit tests for validating a build. A value of false means this metric will not be enforced.
  • violationsThreshold: maximum number of violations of all types for validating a build. A value of false means this metric will not be enforced.
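
To give an idea of the checks these parameters drive, here is a rough sketch of the verification logic in Python. The real implementation lives in SonarReports.groovy from the package above; the /api/resources endpoint and the metric keys shown correspond to Sonar 2.x and should be treated as assumptions:

import json, urllib

sonarUrl = "http://localhost:9000"          # sonarUrl job parameter
sonarProjectId = "com.example:myproject"    # sonarProjectId job parameter
thresholds = {                              # False means "not enforced"
    "violations_density": 80.0,             # rulesComplianceThreshold (minimum %)
    "blocker_violations": 0,                # blockerThreshold (maximum count)
    "coverage": False,                      # codeCoverageThreshold disabled here
}
minimums = ("violations_density", "coverage", "test_success_density")

url = "%s/api/resources?resource=%s&metrics=%s&format=json" % (
    sonarUrl, sonarProjectId, ",".join(thresholds))
resource = json.load(urllib.urlopen(url))[0]
measures = dict((m["key"], m["val"]) for m in resource.get("msr", []))

failures = []
for key, limit in thresholds.items():
    if limit is False or key not in measures:
        continue                            # metric not enforced or not measured
    value = measures[key]
    ok = value >= limit if key in minimums else value <= limit
    if not ok:
        failures.append("%s=%s (threshold %s)" % (key, value, limit))

if failures:
    print "Build FAILED quality control: " + ", ".join(failures)
else:
    print "Build meets all enforced quality thresholds"

Thresholds set to false are simply skipped, mirroring the ‘not enforced’ semantics described above.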

Finally, we enable the Groovy Postbuild plugin by selecting the corresponding checkbox under the Post-build Actions section of the job configuration page. In the text box, we include the following Groovy code to call into our script:

sonarReportsScript = "${System.getProperty('HUDSON_HOME')}/scripts/groovy/SonarReports.groovy"
shell = new GroovyShell(getBinding())
println "Executing script for Sonar report generation from ${sonarReportsScript}"
shell.evaluate(new File(sonarReportsScript))
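
Note that passing getBinding() to the GroovyShell shares the postbuild script’s binding with the evaluated file, including the manager object that the Groovy Postbuild plugin exposes for interacting with the build; this is presumably how SonarReports.groovy reads the job parameters and marks the build as failed.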

Your Hudson configuration page should look like this:

Generating Sonar Reports

In order to automatically generate Sonar reports, we can configure our Hudson job to build periodically (e.g. daily) by selecting this option under Build Triggers. The job will then execute with the specified frequency, using the default quality thresholds we configured in the job’s parameters.

It is also possible to run the job manually to generate reports on demand at any time. In this case, Hudson will ask for the value of the threshold parameters that will be passed in to our Groovy script. These values will override the default values specified in the job’s configuration. Here is an example:

Verifying Quality Control Metrics

When the Hudson job runs, our Groovy script will verify that any thresholds defined in the job’s configuration are met by the project metrics extracted from Sonar. If the thresholds are met, the build will succeed and a summary of the quality control metrics will appear in the Hudson build. In addition, a summary email will be sent to the recipient list emailRecipients, providing interested parties with information regarding the key analyzed metrics.

On the other hand, if the thresholds are not met, the build will be marked as failed and the metric violations will be described in the Hudson build. Similarly, an email will be sent out informing recipients of the quality control violation.

Conclusion

This article demonstrates how Hudson can be extended with the use of dynamic programming languages like Groovy. In our example, we have created a Hudson job that verifies quality control metrics generated by Sonar and automatically sends quality reports by email. This type of functionality is useful in continuous integration environments, in order to extend the default features provided by Hudson or Sonar to meet custom needs.

Human readable JVM GC timestamps

When we are diagnosing problems in a Java (EE or otherwise) application, it is often a good idea to check how garbage collection is performing. One of the most basic and unobtrusive actions is to enable garbage collection logging.

As you may know, if we add the following arguments to the java start command…

-Xloggc:<file_name> -XX:+PrintGCDetails -XX:+PrintGCDateStamps

… the JVM will start writing garbage collection messages to the file we set with the -Xloggc parameter. The messages should look something like:


2010-04-22T18:12:27.796+0200: 22.317: [GC 59030K->52906K(97244K), 0.0019061 secs]
2010-04-22T18:12:27.828+0200: 22.348: [GC 59114K->52749K(97244K), 0.0021595 secs]
2010-04-22T18:12:27.859+0200: 22.380: [GC 58957K->53335K(97244K), 0.0022615 secs]
2010-04-22T18:12:27.890+0200: 22.409: [GC 59543K->53385K(97244K), 0.0024157 secs]

The first field of each line is simply the date and time at which the reported garbage collection event starts.

Unfortunately -XX:+PrintGCDateStamps is available only for Java 6 Update 4 and later JVMs. So, if we are unlucky and our application is running on older JVMs we are forced to use…

-Xloggc:<file> -XX:+PrintGCDetails

… and the messages will be like:


22.317: [GC 59030K->52906K(97244K), 0.0019061 secs]
22.348: [GC 59114K->52749K(97244K), 0.0021595 secs]
22.380: [GC 58957K->53335K(97244K), 0.0022615 secs]
22.409: [GC 59543K->53385K(97244K), 0.0024157 secs]

Now, the leading number of each line (also present in the previous format) is the seconds elapsed since JVM start time.

Mmm… way harder to correlate GC events with information from other log files in this case :/

Wouldn’t it be easier to process the GC log file and calculate the date and time from the seconds elapsed? It seems so, but seconds elapsed from… when? Or, putting it another way, where do we extract the JVM startup date and time from?

In order to be as unobtrusive as possible, we should try to calculate the start date and time from the same GC log file. That brings us to the file attributes. We have different options:

Unix           Windows
Access time    Access time
Change time    Creation time
Modify time    Modify time

We discard access time (for obvious reasons), and change time and creation time because they are not available on both platforms, so we are left with modification time, which represents the time when the file was last modified.

In Windows, the modification time is preserved when the file is copied elsewhere, but in Unix we should use the -p flag of cp to preserve the timestamp attributes if we want to copy the GC log file before processing it.

The last modification time of the GC log file should match the last timestamp recorded for a GC event in the log file. Well… for the purists, it should match exactly the last elapsed time plus the execution time of that event (22.409 and 0.0024157 in the line below), as each log line is written piece by piece as it executes.


22.409: [GC 59543K->53385K(97244K), 0.0024157 secs]

In our approach, we discard the execution time as we don’t need accurate precision to have a rough idea of what time each garbage collection event occurred. Nevertheless, keep in mind that GC execution time could sometimes be as long as several seconds in large heaps.

When we ran into this situation at a client recently, we needed to develop a simple, portable script quickly, so we used Python for the task. You already knew we don’t do just Java, didn’t you? 😛

#!/usr/bin/env python

import sys, os, datetime

# true if string is a positive float
def validSeconds(str_sec):
    try:
        return 0 < float(str_sec)
    except ValueError:
        return False
                
# show usage                
if len(sys.argv) < 2:
    print "Usage: %s " % (sys.argv[0])
    sys.exit(1)
    
file_str = sys.argv[1]
lastmod_date = datetime.datetime.fromtimestamp(os.path.getmtime(file_str))

file = open(file_str, 'r')
lines = file.readlines()
file.close()

# get last elapsed time
for line in reversed(lines):
    parts = line.split(':')
    if validSeconds(parts[0]):
        break

# calculate start time
start_date = lastmod_date - datetime.timedelta(seconds=float(parts[0]))
  
# print file prepending human readable time where appropriate
for line in lines:
    parts = line.split(':')
    if not validSeconds(parts[0]):
        print line.rstrip()
        continue
    line_date = start_date + datetime.timedelta(seconds=float(parts[0]))
    print "%s: %s" % (line_date.isoformat(), line.rstrip())

The script output can be redirected to another file, where we’ll have

2010-04-22T18:12:27.796375: 22.317: [GC 59030K->52906K(97244K), 0.0019061 secs]
2010-04-22T18:12:27.828375: 22.348: [GC 59114K->52749K(97244K), 0.0021595 secs]
2010-04-22T18:12:27.859375: 22.380: [GC 58957K->53335K(97244K), 0.0022615 secs]
2010-04-22T18:12:27.890375: 22.409: [GC 59543K->53385K(97244K), 0.0024157 secs]

You may note that the date format is not 100% identical to the one produced by the -XX:+PrintGCDateStamps argument, but it should be enough to get an idea of when each GC event happened (timezone management in Python is way out of the scope of this blog entry).

This has been my first blog entry for The Server Labs and I hope some of you find it useful. Of course, all comments, suggestions and feedback are very welcome.

Intellectual Property (IPR) Management and Monitoring Tools

It seems that every day projects have more and more dependencies on libraries (internal or external) and, of course, many of these depend on other libraries, resulting in a large dependency tree for any given project. How do you know if any of those libraries contains code which is licensed in a way that is incompatible with your company’s policies, e.g. no GPL?

BT (the former British Telecom) apparently didn’t and ended up having to publish all the code used in one of the routers it distributes due to a GPL violation.

To give you an idea of the scale of this problem, a quick search of my local Maven repository reveals that it contains 1760 JAR files. Admittedly, not all of these belong to one single project; maybe they are spread over 20 different projects. Even so, it is pretty infeasible to try to manage such a task manually.

Tools like Maven are a great help for managing the dependency tree of your project, but they don’t help much with checking the license that each dependency uses. The pom.xml file permits the use of a <license> element, but it is optional, many libraries either don’t use Maven or don’t specify the license, and you have to check compliance manually in any case.
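
To get a feel for the scale of the problem, a few lines of Python are enough to count how many POMs in a local Maven repository actually declare a license. This is only a rough illustration (assuming the default repository location), not a substitute for the tools discussed below:

import os
import xml.etree.ElementTree as ET

POM_NS = "{http://maven.apache.org/POM/4.0.0}"
repo = os.path.expanduser("~/.m2/repository")
declared = missing = 0

for dirpath, dirnames, filenames in os.walk(repo):
    for name in filenames:
        if not name.endswith(".pom"):
            continue
        try:
            root = ET.parse(os.path.join(dirpath, name)).getroot()
        except Exception:
            continue                        # skip malformed POMs
        if root.findall(POM_NS + "licenses/" + POM_NS + "license"):
            declared += 1
        else:
            missing += 1                    # no <license> element declared

print "POMs declaring a license: %d, not declaring one: %d" % (declared, missing)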

This is where IPR monitoring tools come in. Such tools allow the definition of licensing policies at an organizational level and provide mechanisms to monitor compliance with these policies in software projects, raising alerts on detected violations.

We recently had to take a look at such tools for one of our clients. After studying the market, we discovered that there are currently no open-source solutions covering this problem domain, but several commercial tools address the problem of continuous IPR monitoring.

For reference purposes, here is a list of the providers that we discovered:

IPR Management Tool                        Site
Palamida Compliance Edition                http://www.palamida.com
Black Duck Protex                          http://www.blackducksoftware.com/protex
Protecode                                  http://www.protecode.com
HiSoftware AccVerify                       http://www.hisoftware.com
OpenLogic Library or Enterprise Edition    http://www.openlogic.com

All of these commercial products offer common features:

  • Automated binary and source code analysis with multi-language support (Java, C/C++, C#,
    Visual Basic, Perl, Python, PHP). The analysis is performed against an external proprietary
    database that contains the code of most open-source products.
  • Provide workflows to control the IPR of software projects through the whole
    lifecycle, based on defined licensing policies.
  • Approval/disapproval licensing mechanisms, as well as bills of materials for
    software releases summarizing components, licenses, approval status and license/policy
    violations.
  • Different levels of code fragment recognition to detect reuse of code.
  • User interfaces offering policy management, reporting and dashboard features.
  • Support for integration of code scan in Continuous Integration platforms via command line
    interface execution.

We think that these products are going to become increasingly important as the total number of libraries used in projects shows no sign of decreasing and there will always be a need to protect intellectual property.

Developing applications with SCA and a JBI-Based supporting infrastructure

We have been working with SOA technologies and solutions in the commercial and open-source arena for some years now, and I would like to start a new series with this post covering the developments of two major standardisation efforts in this area: SCA (Service Component Architecture) and JBI (Java Business Integration).

While for some time SCA and JBI were presented and considered competitors, it is now a quite accepted idea in the industry that these standards cover different standardisation areas. They can be used separately, but also together to get best-of-breed solutions.

SCA’s main benefit is that it provides a technology-agnostic, generic programming model that decouples component implementations from their communication, allowing a high level of reuse. Applications developed following the SCA model should be deployable without changes on different SCA vendor platforms, following different integration and deployment patterns depending on project needs. This helps to clearly separate application concerns, allowing developers to focus on the services’ business logic while integration and deployment issues are handled by architects and integrators.

On the other hand, JBI standardises a Java-based integration infrastructure where components from different vendors can interact in a standard fashion. This standard is widely used to implement standardised ESBs and can provide the integration platform where SCA applications run.

I was especially interested in solutions implementing the mix: offering SCA to provide standardisation at the application composition level, while using JBI to provide the standard integration and runtime infrastructure in the form of an Enterprise Service Bus (ESB). Examples of JBI implementations of ESBs are Apache ServiceMix, OpenESB and OW2 PEtALS.

In this area, we can find several efforts, mainly the Eclipse Swordfish project and OW2 PEtALS.
Eclipse Swordfish looks like a very promising project, mixing JBI and OSGi to implement a fully distributed Enterprise Service Bus infrastructure where SCA-based applications can run; however, at this moment its SCA support is quite limited. OW2 PEtALS also offers a distributed ESB solution based on JBI, and has an SCA service engine, based on OW2 Frascati, to run SCA composite applications. To learn more about how this is implemented, have a look at this presentation from the PEtALS guys.

So I decided to try to use OW2 PEtALS to run a simple SCA calculator, similar to the Apache Tuscany calculator sample. My objective was to verify the value of developing a SOA solution with SCA, using the integration features of JBI as a mediation ESB, and to explore the possibilities of distributed ESB features and of extensibility via new JBI components, such as JBI4Corba.

In order to follow this post, it will be helpful if you are familiar with JBI and SCA concepts.

To support the development I used the latest Eclipse 3.5 Galileo, fully loaded with the SOA Tools, which include the SCA Tools. These tools provide a nice graphical environment to develop SCA composites, as we will see. Additionally, PEtALS offers a series of Eclipse plugins that make a developer’s life easier when creating JBI Service Units (SU) and JBI Service Assemblies (SA). I was pleasantly surprised to see that they give the user the possibility to set up either simple projects or Maven projects. Also, it is nice to see the use of Maven archetypes all over the place. As you can imagine, I decided to go the Maven way, making my life much easier. They offer quite a good developer manual providing all the information needed to set up the development environment.

So, the complete list of required gear is:

  • the PEtALS quickstart distribution, plus the SOAP BC and SCA SE components
  • Eclipse 3.5 Galileo with the SOA Tools (including the SCA Tools) and the PEtALS Eclipse plugins
  • Maven, with the PEtALS Maven repository configured
  • Apache Axis (for Java2WSDL) and SOAPUI for testing

Overview

We want to take an SCA Calculator implemented as Java components and deploy it in the PEtALS ESB (as depicted below), using a SOAP Binding Component to expose the application as a web service to the external world. We will use SOAPUI to test the application.

SCA Calculator exposed as SOAP WS in PEtALS.

In order to deploy this we need to configure and develop the following artifacts:

  1. Install the necessary PEtALS components into the ESB: the SOAP BC and the SCA SE.
  2. Create a JBI Service Unit (SU provide) containing the SCA composite, to be deployed against the SCA SE.
  3. Create a JBI Service Unit (SU consume) to expose the SCA composite via a SOAP WS.
  4. Create a JBI Service Assembly (SA) containing the two SUs, ready to be deployed into the ESB.

The complete sources of the article can be found here (Maven Projects).

Install necessary PEtALS components into the ESB

Starting with the quickstart PEtALS distribution makes everything really simple. You need to simply start the bus with the command:

/bin/startup.sh -C (Will start in console mode, very handy)

If you are on Windows, there is a typo in the documentation and it needs to be started with a lowercase -c.

Installing the SOAP BC and the SCA SE is as simple as copying the two zip files for the components into the “install” directory of PETALS_HOME. The components will be automatically deployed into the bus. You can also deploy any BC, SE or SA using the terminal console or via the web console.

Make sure you read the limitations (e.g. redeployment of SUs) and apply the changes currently needed for the SCA Service Engine (i.e. isolatedClassLoader), as described in the component’s documentation.

The web console will be available at http://localhost:7878/.

Let’s start with the difficult bit: creating the SCA SU. The others, the SOAP SU and the SA, will be very simple.

Create a JBI Service Unit (SU Provide) containing the SCA composite

Generate Maven Project
We can generate an empty Maven Service Unit project in several ways: using Maven archetypes from the command line, or via the PEtALS Eclipse plugins. I found the Eclipse plugins more complete (see picture below), as they are able to create empty projects already targeted to work as consume/provide SUs for existing PEtALS components.

PEtALS Eclipse plugins

For this SU, we need to define a new project that deploys into the SCA SE, therefore we select “Use PEtALS technical service -> Use SCA“. You will have to fill in some fields, like the name of the composite (e.g. “Calculator”) and the target namespace (e.g. “http://demo.sca.theserverlabs.com”).

This will create a Maven project (you need to select that option); let’s call the project “su-SCA-Calculator-provide”. You will need to fill in the SU-specific pom.xml entries (i.e. groupId, modelVersion and version). Also, remove the parent definitions added by default.

Make sure you have a “description” in the pom.xml, as it will be used later by the Service Assembly to populate some fields.

Create SCA Composite
The wizard has already created the SCA composite, called “Calculator.composite”, under the “/src/main/jbi” directory. Composites must be defined in this directory so that the files are kept at the root level of the SU when building the project, and not included in the jar file created with the SCA artifact code.
To associate an SCA composite diagram with the XML configuration file, right click on the SCA composite file and select “SCA->Initialize SCA Composite Diagram file”. This will create a “.composite_diagram” file that will always be in sync with the XML configuration, and vice versa.

Once the composite is created we can modify it via XML or via the designer. The SCA composite for the Calculator would look something like this:

Calculator SCA Composite

As you can see, our composite is very simple: it has a CalculatorService whose interface is exposed as a service outside the composite, and which has four references to other composite components providing the add, subtract, divide and multiply implementations.

The SCA composite configuration file would look something like this (component and class names follow the Tuscany calculator sample):

<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           xmlns:frascati="http://frascati.ow2.org/xmlns/sca/1.1"
           targetNamespace="http://demo.sca.theserverlabs.com"
           name="Calculator">

    <!-- the exact Frascati binding attributes depend on the SCA SE version -->
    <service name="CalculatorService" promote="CalculatorServiceComponent/CalculatorService">
        <interface.java interface="calculator.CalculatorService"/>
        <frascati:binding.jbi wsdl="calculator.wsdl"/>
    </service>

    <component name="CalculatorServiceComponent">
        <implementation.java class="calculator.CalculatorServiceImpl"/>
        <reference name="addService" target="AddServiceComponent"/>
        <reference name="subtractService" target="SubtractServiceComponent"/>
        <reference name="multiplyService" target="MultiplyServiceComponent"/>
        <reference name="divideService" target="DivideServiceComponent"/>
    </component>

    <component name="AddServiceComponent">
        <implementation.java class="calculator.AddServiceImpl"/>
    </component>
    <component name="SubtractServiceComponent">
        <implementation.java class="calculator.SubtractServiceImpl"/>
    </component>
    <component name="MultiplyServiceComponent">
        <implementation.java class="calculator.MultiplyServiceImpl"/>
    </component>
    <component name="DivideServiceComponent">
        <implementation.java class="calculator.DivideServiceImpl"/>
    </component>
</composite>
Note that at the beginning of the file the CalculatorService interface is promoted as a composite service. The SCA binding defined for it is the Frascati JBI binding, which will register the service in the PEtALS bus as an internal JBI endpoint. The CalculatorService references are wired to the rest of the components via “target”.

Create required WSDL files for promoted SCA services

The JBI message exchange model is based on WSDL and, as described in the SCA SE documentation, the SU package must contain a WSDL describing the promoted services of each composite. In our case, we need to provide a WSDL for the CalculatorService.
Currently, the WSDL must be provided in document/literal wrapped style and is not generated automatically. However, tools like Apache Axis’ Java2WSDL allow us to create it in a simple way.

An example Apache Axis command to generate the WSDL would be:

java org.apache.axis.wsdl.Java2WSDL  -y WRAPPED -u LITERAL  -l localhost   calculator.CalculatorService

This will generate a WSDL that should be copied into the “/src/main/jbi” directory under the name provided in the “frascati:binding.jbi” section of the composite file; in our case, “calculator.wsdl”.

We need to make a few changes to the generated WSDL for it to work:

  1. Change the request message definitions from, for instance, “addRequest” to “add”, so that the WSDL complies with the “wrapped” style, where the input wrapper has the same name as the operation.
  2. Align this change in the operations’ port type and binding sections.

I don’t include the final WSDL file here as it is a bit long. You can find it in the sources package.

Configure jbi.xml file

Everything is ready; we just need to define which services are provided or consumed by this SU. In this case, the SU provides services, so the jbi.xml file would look like:

<?xml version="1.0" encoding="UTF-8"?>
<jbi:jbi version="1.0" xmlns:jbi="http://java.sun.com/xml/ns/jbi">
    <jbi:services binding-component="false">
        <jbi:provides interface-name="..." service-name="..." endpoint-name="...">
            <!-- PEtALS extension parameters (elements and namespaces abridged):
                 the WSDL describing the provided service (calculator.wsdl)
                 and, for the SCA SE, the composite file (Calculator.composite) -->
        </jbi:provides>
    </jbi:services>
</jbi:jbi>

For each provided service we need to define the interface-name, service-name and endpoint-name. It is important that the binding definitions in calculator.wsdl match this information: the interface-name must match the WSDL portType name and the endpoint-name must match the WSDL port.
We also need to tell PEtALS which WSDL file describes the provided service and, specifically for the SCA component, which composite file to use.

Build the project

The last step is to build the project with a simple:

mvn install

Make sure you have added the PEtALS Maven repository to your Maven configuration, so that the required artifacts can be found. This is well described in their development guide.

Create a JBI Service Unit (SU Consume) to expose the SCA composite via SOAP WS

This SU is much simpler. The component just needs the information about the internal JBI service it must consume, and it will do the rest for us.

Using the PEtALS Eclipse plugins again, we create a new Maven project (“New->Other->Petals->Expose Service from PEtALS->use SOAP”) called “su-SOAP-calculatorService-consume”. The wizard lets you define the service you want to expose, so if you select the SCA SU Eclipse project, the rest of the fields will be populated automatically.

SOAP SU Consume Wizard

As with the other SU, you will need to define the project-specific fields of the pom.xml, and don’t forget to define a “description” field.

This SU only contains the jbi.xml, defining which service must be exposed (consumed). We need to make sure that the consume section references the previously defined SCA SU. This would be the required jbi.xml:

<?xml version="1.0" encoding="UTF-8"?>
<jbi:jbi version="1.0" xmlns:jbi="http://java.sun.com/xml/ns/jbi">
    <jbi:services binding-component="true">
        <jbi:consumes interface-name="..." service-name="..." endpoint-name="...">
            <!-- SOAP BC extension parameters (elements and namespaces abridged)
                 configuring the exposed web service; the values in the original
                 configuration were: CalculatorService, false, SOAP, soapbc -->
        </jbi:consumes>
    </jbi:services>
</jbi:jbi>
All fields are defined at the moment of project creation. The only important thing is to make sure that the “consumes” section provides the proper interface-name, service-name and endpoint-name. These must match those defined in the SCA SU, so the two can talk to each other. If you selected the SCA SU Eclipse project in the SOAP SU creation wizard, all these fields should already be correctly filled.

Once this is done you can build the project with a maven install.

Create a JBI Service Assembly (SA) for deployment

Deployment into the JBI ESB is performed via Service Assemblies, which can contain many SUs, each one bound to a different component (BC, SE, etc.). In our case we have the SCA SU bound to the SCA SE component and the SOAP SU bound to the SOAP BC component. This is defined in the jbi.xml of the Service Assembly.

As before, we can use the PEtALS Eclipse plugin to create an empty SA Maven project (“New->Other->PEtALS->SA Maven Project”), called “sa-SCA-Calculator”. The wizard will allow us to add the SUs we want into the SA, so no more configuration is needed.

Once created, as with the previous artifacts, we need to define the project-specific parameters in the pom.xml. Make sure you define a description.

The SA project defines the SUs to be included via standard Maven dependencies. That’s the only configuration step to perform (automatically done by the wizard), and the jbi.xml will be created automatically. The pom.xml of the SA would then look like:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">

    <artifactId>sa-SCA-Calculator</artifactId>
    <description>A description of sa-SCA-Calculator</description>
    <version>1.0-SNAPSHOT</version>
    <packaging>jbi-service-assembly</packaging>

    <name>sa-SCA-Calculator</name>
    <groupId>com.tsl.sca</groupId>
    <modelVersion>4.0.0</modelVersion>

    <dependencies>
        <dependency>
            <artifactId>su-SCA-Calculator-provide</artifactId>
            <groupId>com.tsl.sca</groupId>
            <version>1.0-SNAPSHOT</version>
            <type>jbi-service-unit</type>
        </dependency>
        <dependency>
            <artifactId>su-SOAP-calculatorService-consume</artifactId>
            <groupId>com.tsl.sca</groupId>
            <version>1.0-SNAPSHOT</version>
            <type>jbi-service-unit</type>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.ow2.petals</groupId>
                <artifactId>maven-petals-plugin</artifactId>
                <extensions>true</extensions>
            </plugin>
        </plugins>
    </build>
</project>
Run a maven install to build the SA.

Deploy SA into PEtALS

The generated zip file, “sa-SCA-Calculator-1.0-SNAPSHOT.zip”, just needs to be copied to the PETALS_HOME/install directory and the SA will be automatically installed and started.

The PEtALS console should show information regarding the compilation and creation of the required classes for the SCA and SOAP SUs:

SCA Calculator deployed in PEtALS

Test the Service

The only thing left is to test that everything works. For that I used SOAPUI, loading the WSDL from the SOAP WS services page provided by the PEtALS component at http://localhost:8084/petals/services/CalculatorService?wsdl.

Testing SCA Calculator with SOAPUI
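
If you prefer a scripted smoke test to a GUI client, something like the following also works. The operation name, parameter names and request namespace below are assumptions that depend on the WSDL generated earlier, so adjust them to match calculator.wsdl:

import urllib2

# post a hand-written SOAP request to the calculator endpoint exposed by the SOAP BC
endpoint = "http://localhost:8084/petals/services/CalculatorService"
envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:calc="http://demo.sca.theserverlabs.com">
  <soapenv:Body>
    <calc:add>
      <calc:x>2</calc:x>
      <calc:y>3</calc:y>
    </calc:add>
  </soapenv:Body>
</soapenv:Envelope>"""

request = urllib2.Request(endpoint, envelope,
                          {"Content-Type": "text/xml; charset=UTF-8",
                           "SOAPAction": '""'})
print urllib2.urlopen(request).read()       # expect an addResponse containing 5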

Future Posts

In future posts I will explore transparent deployment of SCA applications across a distributed ESB (such as PEtALS) and the usage of external references to services using CORBA (e.g. replacing the add service with an external CORBA-based service).
The latter will also explore the usage of third-party JBI components (such as JBI4Corba) in PEtALS, as there is no native PEtALS JBI BC component for CORBA.