The Server Labs Blog


Genomic Processing in the Cloud

Introduction

Over the last decade, a new trend has manifested itself in the field of genomic processing. With the advent of the new generation of DNA sequencers has come an explosion in the throughput of DNA sequencing, resulting in the cost per base of generated sequences falling dramatically. Consequently, the bottleneck in sequencing projects around the world has shifted from obtaining DNA reads to the alignment and post-processing of the huge amount of read data now available. To minimize both processing time and memory requirements, specialized algorithms have been developed that trade off speed and memory requirements with the sensitivity of their alignments. Examples include the ultrafast memory-efficient short read aligners BWA and Bowtie, both based on the Burrows-Wheeler transform.

The need to analyse increasingly large amounts of genomic and proteomic data has meant that research labs allocate an increasing part of their time and budget to provisioning, managing and maintaining their computational infrastructure. A possible solution to their need for on-demand computational power is to take advantage of the public cloud. With its on-demand operational model, labs can benefit from considerable cost savings by only paying for hardware when it is needed for their experiments. Procurement of new hardware is also simplified and more easily justified, without the need to expand in-house resources.

Not only does the cloud reduce the time needed to provision new hardware; it also provides significant time savings by automating the installation and customization of the software that runs on top of that hardware. A controlled computational environment for the post-processing of experiments allows results to be more easily reproduced, a key objective for researchers across all disciplines. Results can also be easily shared among researchers, as cloud-based services facilitate the publishing of data over the internet while allowing researchers control over access to it. Finally, data storage in the cloud was designed from the ground up with high availability and durability as key objectives. By storing their experiment data in the cloud, researchers can ensure it is safely replicated among data centres. These advantages free researchers from time-consuming operational concerns, such as in-house backups and the provisioning and management of servers from which to share their experiment results.

Given the vast potential benefits of the cloud, The Server Labs is working with the Bioinformatics Department at the Spanish National Cancer Research Institute (CNIO) to develop a cloud-based solution that would meet their genomic processing needs.

An Environment for Genomic Processing in the Cloud

The first step towards carrying out genomic processing in the cloud is identifying a suitable computational environment, including hardware architecture, operating system and genomic processing tools. CNIO helped us identify the following software packages employed in their typical genomic processing workflows:

  • Burrows-Wheeler Alignment Tool (BWA): BWA aligns short DNA sequences (reads) to a sequence database such as the human reference genome.
  • Novoalign: Novoalign is a DNA short-read mapper implemented by Novocraft Technologies. The tool uses spaced-seed indexing to align either single or paired-end reads by means of the Needleman-Wunsch algorithm. The source code is not available for download; however, anybody may download and use the programs free of charge for research and other non-profit activities, as long as results are published in open journals.
  • SAM tools: After read alignment, one might want to call variants or view the alignments against the reference genome. SAM tools is an open-source package of software applications that includes an alignment viewer and a consensus base caller to provide lists of variants (somatic mutations, SNPs and indels).
  • BEDTools: This software facilitates common genomics tasks for the comparison, manipulation and annotation of genomic features in Browser Extensible Data (.BED) format. BEDTools supports the comparison of sequence alignments, allowing the user to compare next-generation sequencing data with both public and custom genome annotation tracks. The BEDTools source code is freely available.

Note that, except for Novoalign, all software packages listed above are open source and freely available.

One of the requirements of these tools is that the underlying hardware architecture is 64-bit. For our initial proof of concept, we decided to run a base image with Ubuntu 9.10 for 64-bit on an Amazon EC2 large instance with 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each) and 850 GB of local instance storage. Once we had selected the base image and instance type to run on the cloud, we proceeded to automate the installation and configuration of the software packages. Automating this step ensures that no additional setup tasks are required when launching new instances in the cloud, and provides a controlled and reproducible environment for genomic processing.
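For illustration, launching such an instance with Amazon's EC2 API tools looks roughly like this (the AMI ID and key pair name below are placeholders, not the actual values we used):

# Launch a 64-bit Ubuntu 9.10 AMI as an EC2 large (m1.large) instance
ec2-run-instances ami-xxxxxxxx -t m1.large -k my-keypair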

By using the RightScale cloud-management platform, we were able to separate out the selection of instance type and base image from the installation and configuration of software specific to genomic processing. First, we created a Server definition for the instance type and operating system specified above. We then scripted the installation and configuration of genomic processing tools, as well as any OS customizations, so that these steps can be executed automatically after new instances are first booted.

Once our new instances were up and running and the software environment finalized, we executed some typical genomic workflows suggested to us by CNIO. We found that for a typical workflow with a raw data input between 3 and 20 GB, the total processing time on the cloud ranged between 1 and 4 hours, depending on the size of the raw data and whether the alignment was single or paired-end. With EC2 large instances priced at 38 cents per hour, and ignoring the additional time required for customization of the workflow, the cost of the pure processing tasks totalled less than $2 for a single experiment.

We also found the processing times to be comparable to running the same workflow in-house on similar hardware. However, when processing on the cloud, we found that transferring the raw input data from the lab to the cloud data centre could become a bottleneck, depending on the bandwidth available. We were able to work around this limitation by processing our data on Amazon’s European data centre and avoiding peak-hours for our data uploads.

Below, we include an example alignment workflow that we successfully carried out in the Amazon cloud:


Example single-end alignment workflow executed in the cloud.
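As a rough sketch of what such a workflow involves, the steps below show a minimal single-end BWA/SAM tools pipeline of the kind described, assuming a pre-built BWA index of the reference genome and using the command syntax of the BWA and SAM tools releases current at the time; the file names are illustrative:

# Align the reads against the indexed reference genome
bwa aln hg18.fa reads.fastq > reads.sai
# Convert the alignment to SAM format (single-end)
bwa samse hg18.fa reads.sai reads.fastq > reads.sam
# Convert to BAM, sort and index with SAM tools
samtools view -bS reads.sam | samtools sort - reads.sorted
samtools index reads.sorted.bam
# Produce a consensus/variant pileup from the sorted alignments
samtools pileup -cf hg18.fa reads.sorted.bam > reads.pileup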

Maximizing the Advantages of the Cloud

In the first phase, we demonstrated that genomic processing in the cloud is feasible and cost-effective, while providing a performance on par with in-house hardware. To truly realize the benefits of the cloud, however, what we need is an architecture that allows tens or hundreds of experiment jobs to be processed in parallel. This would allow researchers, for instance, to run algorithms with slightly different parameters to analyse the impact on their experiment results. At the same time, we would like a framework which incorporates all of the strengths of the cloud, in particular data durability, publishing mechanisms and audit trails to make experiment results reproducible.

To meet these goals, The Server Labs is developing a genomic processing platform which builds on top of RightScale's RightGrid batch processing system. Our platform facilitates the processing of a large number of jobs by leveraging Amazon's EC2, SQS, and S3 web services in a scalable and cost-efficient manner to match demand. The framework also takes care of scheduling, load management, and data transport, so that the genomic workflow can be executed locally on experiment data available to the EC2 worker instance. By using S3 to store the data, we ensure that any input and result data is highly available and replicated across data centres, freeing our users from the need to back up their data. It also ensures that data can be more easily shared among institutions and researchers, with the appropriate level of access control. In addition, the monitoring and logging of job submissions provides a convenient mechanism for the production of audit trails for all processing tasks.

The following diagram illustrates the main components of The Server Labs Genomic Processing Cloud Framework:


The Server Labs Genomic Processing Cloud Framework.

The Worker Daemon is based on The Server Labs Genomic Processing Server Template, which provides the software stack for genomic processing. It automatically pulls experiment tasks from the SQS input queue, along with the input data from S3, and launches the genomic processing workflow with the appropriate configuration parameters. Any intermediate and final results are stored in S3, and records of completed jobs are written to SQS for auditing.
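RightGrid implements this loop internally, so none of it has to be written by hand; purely as a conceptual sketch, the daemon behaves roughly as follows (the queue and bucket names, and the helper commands, are hypothetical):

# Conceptual sketch only - RightGrid provides this loop out of the box
while true; do
    # Pull the next experiment task from the SQS input queue (hypothetical helper)
    task=$(sqs-receive genomics-input-queue)
    # Fetch the raw input data for this task from S3 (hypothetical helper)
    s3-get genomics-bucket/$task/input/ /scratch/input/
    # Run the genomic workflow locally on the worker instance
    run-workflow /scratch/input /scratch/output
    # Store intermediate and final results back in S3
    s3-put /scratch/output genomics-bucket/$task/results/
    # Record the completed job in SQS for auditing
    sqs-send genomics-output-queue "$task completed"
done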

Cost Analysis

Given a RightGrid-based solution for genomic processing, we would like to analyse how much it would cost to run CNIO's workflows on the Amazon cloud. Let us assume, for the sake of our analysis, that CNIO runs 10 experiments in an average month, each of which generates an average of 10 GB of raw input data and produces an additional 20 GB of result data. For each of these experiments, CNIO wishes to run 4 different workflows, with an average running time of 2 hours on a large EC2 instance. In addition, we assume that the experiment results are downloaded once to the CNIO in-house data centre. We also assume that the customer already has a RightScale account, the cost of which is not included in the analysis.

Amazon service costs per workflow:

  • SQS: negligible
  • S3 data transfer in: 10 GB * $0.10 per GB = $1.00
  • S3 data transfer out: 1 download * 20 GB * $0.15 per GB = $3.00
  • S3 storage: 30 GB * $0.14 per GB = $4.20
  • S3 subtotal: $8.20
  • EC2: 2 hours * $0.38 per hour = $0.76
  • Total: $8.20 + $0.76 = $8.96 per workflow

Total cost:
10 experiments per month * 4 workflows per experiment * $8.96 per workflow = $358.40 per month, or roughly $4,300 per year.

Towards an On-demand Genomic Processing Service

By building on the RightGrid framework, The Server Labs is able to offer a robust cloud-based platform on which to perform on-demand genomic processing tasks, while enabling experiment results to be more easily reproduced, stored and published. To make genomic processing on the cloud even simpler, the on-demand model can be taken one step further by providing pay-as-you-go software as a service. In such a model, researchers need not even be aware that the processing of their data is done in the cloud. Instead, they would interact with the platform via a web interface, where they could upload their experiment's raw input data, select their workflow of choice, and choose whether or not to share their result data. They would then be notified asynchronously via email once the processing of their experiment data has completed.

References

Low Cost, Scalable Proteomics Data Analysis Using Amazon’s Cloud Computing Services and Open Source Search Algorithms

How to map billions of short reads onto genomes

The RightGrid batch-processing framework

The Server Labs at CeBIT 2010

Come and see us at CeBIT 2010 where we will have a booth on Amazon’s Stand in the Main Exhibition Hall: Hall 2, Stand B26. We will be there on March the 4th and March the 5th.

We will be happy to chat to you about our added value in bringing IT Architecture solutions to the Cloud.

We will also be presenting the work on HPC we have been doing for the European Space Agency, at 13:30 on March the 4th, and at 11:00 on March the 5th, both in the Theatre at the back of the Amazon stand.


Complex low-cost HPC Data Processing in the Cloud

With the maturing of cloud computing, it is now feasible to run even the most complex HPC applications in the cloud. Data storage and high-performance computing resources, fundamental for these applications, can be outsourced, thus leveraging scalability, flexibility and high availability at a fraction of the cost of traditional in-house data processing. This presentation evaluates the suitability of Amazon's EC2/S3 for such a scenario by running, in Amazon EC2, a distributed astrometric process that The Server Labs developed for the European Space Agency's Gaia mission.

Eating our own Dog Food! – The Server Labs moves its Lab to the Cloud!


After all these years dealing with servers, switches, routers and virtualisation technologies, we think it's time to move our lab into the next phase: the Cloud, specifically the Amazon EC2 Cloud.

We are actively working in the Cloud now on different projects, as you've seen in previous blog posts. We believe this step is not only a natural one but also takes us in the right direction towards more effective management of resources and higher business agility. This fits the needs of a company like ours, and we believe it will also fit many others of different sizes and requirements.
Cloud computing is not just a niche for special projects with very specific needs; it can be used by ordinary companies to run a more cost-effective IT infrastructure, at least in certain areas.

In our lab we had a mixture of server configurations, comprising Sun and Dell servers running all kinds of OSs, using VMware and Sun virtualisation technology. The purpose of our Lab is to provide an infrastructure for our staff, partners and customers to perform specific tests, prototypes, PoCs and the like. The Lab is also our R&D resource for creating new architecture solutions.

Moving our Lab to the cloud will provide an infrastructure that is more flexible, manageable, powerful, simple and definitely more elastic to set up, use and maintain, without removing any of the features we currently have. It will also allow us to concentrate more on this new paradigm, creating advanced cloud architectures and increasing our overall know-how, which can be fed back to customers and the community.

The first step in this small project was a short feasibility study to identify the technologies to use inside the cloud, primarily to maintain confidentiality and secure access, but also to properly manage and monitor the infrastructure. Additionally, one of the main drivers of this activity was to reduce our monthly hosting cost, so we needed to calculate, based on current usage, the savings of moving to the cloud.

Cost Study

To estimate the cost of moving to the cloud, we performed an inventory of the required CPU power, server instances, storage (for both Amazon S3 and EBS) and the estimated data I/O. We also estimated the volume of data transferred between nodes, and between Amazon and the outside world.

We initially planned to automatically shut down and bring up those servers that are only needed during working hours, to save more money. In the end, we will be using Amazon reserved instances, which give an even lower per-hour price, similar to the one we would get by cycling on-demand servers.

Based on this inventory and these estimations, and with the help of the Amazon Cost Calculator, we arrived at a final monthly cost that was approximately one third of our hosting bill!

This figure considers only the physical infrastructure. On top of it come the savings on hardware renewal, pure system administration and system installation. Even using virtualisation technologies, we sometimes had to rearrange things because our physical infrastructure was limited. All these extra costs become savings in the cloud.

Feasibility Study

Moving to the cloud gives most IT managers the feeling that they lose control, most importantly control of their data. While hybrid clouds can keep data under in-house control, in our case we wanted to move everything to the cloud. We are certainly no different from anyone else here: we are quite paranoid about our data and how it would be stored in Amazon EC2. We also still require secure network communication between our nodes in the Amazon network, and the ability to give secure external access to our staff and customers.

There is a set of open-source technologies that helped us materialise these requirements into a solution we feel comfortable with:

  • Filesystem encryption for securing data storage in Amazon EBS (see the sketch after this list).
  • A private network and IP range for all nodes.
  • Cloud-wide encrypted communication between nodes within a private IP network range, via an OpenVPN solution.
  • An IPSec VPN solution for permanent external access to the Cloud Lab, for instance to connect a private cloud/network to the public EC2 Cloud.
  • Use of RightScale to manage and automate the infrastructure.
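To illustrate the first point, here is a minimal sketch of filesystem-level encryption on an attached EBS volume, assuming the volume appears as /dev/sdf and that LUKS/cryptsetup is the tool used (the device and mount point are illustrative):

# Create a LUKS container on the EBS volume (prompts for a passphrase)
cryptsetup luksFormat /dev/sdf
# Open the container as a mapped device
cryptsetup luksOpen /dev/sdf securedata
# Create a filesystem on the mapped device and mount it
mkfs.ext3 /dev/mapper/securedata
mount /dev/mapper/securedata /mnt/secure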

Overview of TSL Secure Cloud deployment

Implementation and Migration

The implementation of our Cloud Lab solution has gone very smoothly and it is working perfectly.
One of the beneficial side effects of migrating different systems into the cloud is that it forces you to be much more organised, as the infrastructure is very focused on reuse and automatic recreation of the different servers.

We have standardised all our Lab images, taking prebuilt images available in Amazon and customising them to include the security hardening, standard services and conventions we have defined. In a matter of seconds we can deploy new images and include them in our secure VPN-based Cloud Lab network, ready to be used.

Our new Cloud Lab is giving us a very stable, cost-effective, elastic and secure infrastructure, which can be rebuilt in minutes using EBS snapshots.

Developing applications with SCA and a JBI-Based supporting infrastructure

We have been working with SOA technologies and solutions in the commercial and open-source arenas for some years now, and I would like to start a new series with this post covering the development of two major standardisation efforts in this area: SCA (Service Component Architecture) and JBI (Java Business Integration).

While for some time SCA and JBI were presented and considered as competitors, it is now quite an accepted idea in the industry that these standards cover different standardisation areas. They can be used separately, but also together to get best-of-breed solutions.

SCA's main benefit is that it provides a technology-agnostic, generic programming model that decouples components' implementation from their communication, allowing a high level of reuse. Applications developed following the SCA model should be deployable without changes on different SCA vendor platforms, following different integration and deployment patterns depending on project needs. This helps to clearly separate application concerns, allowing developers to focus on service business logic while integration and deployment issues are handled by architects and integrators.

On the other hand, JBI standardises a Java-based integration infrastructure where components from different vendors can interact in a standard fashion. This standard is widely used to implement standardised ESBs, and it can provide the integration platform where SCA applications run.

I was especially interested in solutions implementing the mix: offering SCA to provide standardisation at the application composition level, while using JBI to provide the standard integration and runtime infrastructure in the form of an Enterprise Service Bus (ESB). Examples of JBI implementations of ESBs are Apache ServiceMix, OpenESB and OW2 PEtALS.

In this area we can find several efforts, mainly the Eclipse Swordfish project and OW2 PEtALS.
Eclipse Swordfish looks like a very promising project, mixing JBI and OSGi to implement a fully distributed Enterprise Service Bus infrastructure where SCA-based applications can run; however, at this moment its SCA support is quite limited. OW2 PEtALS also offers a distributed ESB solution, based on JBI only, and has an SCA service engine, based on OW2 FraSCAti, to run SCA composite applications. To learn more about how this is implemented, have a look at this presentation from the PEtALS team.

So I decided to try OW2 PEtALS to run a simple SCA calculator, similar to the Apache Tuscany calculator sample. My objective was to verify the value of developing a SOA solution with SCA, using the integration features of JBI as a mediation ESB, and to explore the possibilities of distributed ESB features and of extensibility via new JBI components such as JBI4Corba.

To follow this post, it will be helpful to be familiar with JBI and SCA concepts.

To support the development I used the latest Eclipse 3.5 Galileo, fully loaded with the SOA Tools, which include the SCA Tools. These tools provide a nice graphical environment for developing SCA composites, as we will see.
Additionally, PEtALS offers a series of Eclipse plugins that make a developer's life easier when creating JBI Service Units (SUs) and JBI Service Assemblies (SAs). I was pleasantly surprised to see that they give the user the choice of setting up either simple projects or Maven projects, and it is nice to see the use of Maven archetypes all over the place.
As you can imagine, I decided to go the Maven way, making my life much easier. They offer a quite good developer manual providing all the information needed to set up the development environment.

So, the complete list of the required gear is:

  • Eclipse 3.5 Galileo with the SOA Tools (including the SCA Tools)
  • The PEtALS Eclipse plugins
  • The PEtALS quickstart distribution
  • The PEtALS SOAP Binding Component (BC) and SCA Service Engine (SE)
  • Apache Maven
  • SOAPUI

Overview

We want to implement an SCA Calculator as Java components and deploy it in the PEtALS ESB (as depicted below), using a SOAP Binding Component to expose the application as a web service to the external world. We will use SOAPUI to test the application.


SCA Calculator exposed as SOAP WS in PEtALS.

In order to deploy this we need to configure and develop the following artifacts:

  1. Install the necessary PEtALS components into the ESB: the SOAP BC and the SCA SE.
  2. Create a JBI Service Unit (SU Provide) containing the SCA composite, to be deployed against the SCA SE.
  3. Create a JBI Service Unit (SU Consume) to expose the SCA composite via a SOAP WS.
  4. Create a JBI Service Assembly (SA) containing the two SUs, ready to be deployed into the ESB.

The complete sources of the article can be found here (Maven Projects).

Install necessary PEtALS components into the ESB

Starting with the quickstart PEtALS distribution makes everything really simple. You simply need to start the bus with the command:

$PETALS_HOME/bin/startup.sh -C (starts the bus in console mode, very handy)

If you are on Windows, note that there is a typo in the documentation: the bus needs to be started with a lowercase -c.

Installing the SOAP BC and the SCA SE is as simple as copying the two component zip files into the "install" directory under PETALS_HOME; the components will be automatically deployed into the bus.
You can also deploy any BC, SE or SA using the terminal console or via the web console.
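For example, assuming the component zip files have been downloaded to the current directory (the exact file names depend on the component versions you downloaded):

# Copy the SOAP BC and SCA SE zips into the PEtALS auto-install directory
cp petals-bc-soap-*.zip petals-se-sca-*.zip $PETALS_HOME/install/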

Make sure you read the limitations (e.g. redeployment of SUs) and apply the changes currently required for the SCA Service Engine (i.e. isolatedClassLoader), as described in the component's documentation.

The web console will be available at http://localhost:7878/.

Let's start with the difficult bit: creating the SCA SU. The others, the SOAP SU and the SA, will be very simple.

Create a JBI Service Unit (SU Provide) containing the SCA composite

Generate Maven Project
We can generate an empty Maven Service Unit project in several ways: using Maven archetypes from the command line, or via the PEtALS Eclipse plugins. I found the Eclipse plugins more complete (see picture below), as they are able to create empty projects already targeted to work as consume/provide SUs for existing PEtALS components.


PEtALS Eclipse plugins

For this SU, we need to define a new project that deploys into the SCA SE, so we select "Use PEtALS technical service -> Use SCA".
You will have to fill in some fields, such as the name of the composite (e.g. "Calculator") and the target namespace (e.g. "http://demo.sca.theserverlabs.com").

This will create a Maven project (you need to select that option); let's call the project "su-SCA-Calculator-provide". You will need to fill in the SU-specific pom.xml entries (i.e. groupId, modelVersion and version). Also, remove the parent definitions added by default.

Make sure to have a "description" in the pom.xml, as it will be used later by the Service Assembly to populate some fields.

Create SCA Composite
The wizard has already created the SCA composite, called "Calculator.composite", under the "src/main/jbi" directory. Composites must be defined in this directory so that the files are kept at the root level of the SU when building the project, and not included in the jar file created with the SCA artifact code.
To associate an SCA composite diagram with the XML configuration file, right-click on the SCA composite file and select "SCA->Initialize SCA Composite Diagram file". This will create a ".composite_diagram" file that will always be in sync with the XML configuration, and vice versa.

Once the composite is created we can modify it via the XML or via the designer. The SCA composite for the Calculator would look something like this:


Calculator SCA Composite

As you can see, our composite is very simple: it has a CalculatorService whose interface is exposed as a service outside the composite, and four references to other composite components providing the add, subtract, divide and multiply implementations.

The SCA composite configuration file would look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           xmlns:frascati="http://frascati.ow2.org/xmlns/sca/1.0"
           targetNamespace="http://demo.sca.theserverlabs.com"
           name="Calculator">

	<service name="CalculatorService" promote="CalculatorServiceComponent">
		<frascati:binding.jbi wsdl="calculator.wsdl"/>
	</service>

	<component name="CalculatorServiceComponent">
		<implementation.java class="calculator.CalculatorServiceImpl"/>
		<reference name="addService" target="AddServiceComponent"/>
		<reference name="subtractService" target="SubtractServiceComponent"/>
		<reference name="multiplyService" target="MultiplyServiceComponent"/>
		<reference name="divideService" target="DivideServiceComponent"/>
	</component>

	<component name="AddServiceComponent">
		<implementation.java class="calculator.AddServiceImpl"/>
	</component>

	<component name="SubtractServiceComponent">
		<implementation.java class="calculator.SubtractServiceImpl"/>
	</component>

	<component name="MultiplyServiceComponent">
		<implementation.java class="calculator.MultiplyServiceImpl"/>
	</component>

	<component name="DivideServiceComponent">
		<implementation.java class="calculator.DivideServiceImpl"/>
	</component>

</composite>
Note that at the beginning of the file, the CalculatorService interface is promoted as a composite service. The SCA binding defined for this is the frascati JBI binding, which will register the service in the PEtALS bus as an internal JBI endpoint. The CalculatorService references are wired with the rest of the components via “target”.

Create required WSDL files for promoted SCA services

The JBI message exchange model is based on WSDL and, as described in the SCA SE documentation, the SU package must contain a WSDL describing the promoted services of each composite. In our case, we need to provide a WSDL for the CalculatorService.
Currently, the WSDL must be provided in document/literal wrapped style and is not generated automatically. However, tools like Java2WSDL from Apache Axis allow us to create it in a simple way.

The example Apache Axis command to generate the WSDL would be:

java org.apache.axis.wsdl.Java2WSDL -y WRAPPED -u LITERAL -l localhost calculator.CalculatorService

This generates a WSDL that should be copied into the "src/main/jbi" directory, under the name given in the "frascati:binding.jbi" section of the composite file, in our case "calculator.wsdl".
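For instance, assuming Java2WSDL wrote the file out as CalculatorService.wsdl:

# Copy the generated WSDL into the SU under the name expected by the composite
cp CalculatorService.wsdl su-SCA-Calculator-provide/src/main/jbi/calculator.wsdl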

We need to make a few changes to the generated WSDL for it to work:

  1. Change the request message definitions from, for instance, "addRequest" to "add", so that they comply with the WSDL "wrapped" style, where the input wrapper has the same name as the operation.
  2. Align this change in the port type and binding sections of the operations.

I don't include the final WSDL file here as it is a bit long. You can find it in the sources package.

Configure jbi.xml file

Everything is ready; we just need to define which services are provided or consumed by this SU. In this case the SU provides services, so the jbi.xml file would look like:

<?xml version="1.0" encoding="UTF-8"?>
<jbi xmlns="http://java.sun.com/xml/ns/jbi"
     xmlns:petals="http://petals.ow2.org/extensions"
     xmlns:calc="http://demo.sca.theserverlabs.com"
     version="1.0">

	<services binding-component="false">

		<provides interface-name="calc:CalculatorService"
		          service-name="calc:CalculatorService"
		          endpoint-name="CalculatorServicePort">

			<!-- WSDL describing the provided service -->
			<petals:wsdl>calculator.wsdl</petals:wsdl>

			<!-- SCA-specific: the composite file to deploy -->
			<petals:composite-file>Calculator.composite</petals:composite-file>
		</provides>
	</services>
</jbi>
For each provided service we need to define the interface-name, service-name and endpoint-name, and it is important that these match the calculator.wsdl binding definitions: the interface-name must match the WSDL port type name, and the endpoint-name must match the WSDL port.
We also need to tell PEtALS which WSDL file describes the provided service and, specifically for the SCA component, which file is the composite.

Build the project

The last step is to build the project with a simple:

mvn install

Make sure you have added the PEtALS Maven repository to your Maven configuration, so that it can find the required artifacts. This is well described in their development guide.

Create a JBI Service Unit (SU Consume) to expose the SCA composite via SOAP WS

This SU is much simpler. The component just needs the information about the internal JBI service it must consume, and it will do the rest for us.

Using the PEtALS Eclipse plugins again, we create a new Maven project ("New->Other->PEtALS->Expose Service from PEtALS->use SOAP") called "su-SOAP-calculatorService-consume". The wizard lets you define the service you want to expose, so if you select the SCA SU Eclipse project, the rest of the fields will be populated automatically.


SOAP SU Consume Wizard

As with the other SU, you will need to define the project-specific fields of the pom.xml, and don't forget to define a "description" field.

This SU only contains the jbi.xml, which defines the service to be exposed (consumed). We need to make sure the consume section references the previously defined SCA SU. This would be the required jbi.xml:

<?xml version="1.0" encoding="UTF-8"?>
<jbi xmlns="http://java.sun.com/xml/ns/jbi"
     xmlns:petals="http://petals.ow2.org/extensions"
     xmlns:calc="http://demo.sca.theserverlabs.com"
     version="1.0">

	<services binding-component="true">

		<consumes interface-name="calc:CalculatorService"
		          service-name="calc:CalculatorService"
		          endpoint-name="CalculatorServicePort">

			<!-- SOAP BC parameters: exposed address, mode and protocol -->
			<petals:address>CalculatorService</petals:address>
			<petals:synchronous>false</petals:synchronous>
			<petals:protocol>SOAP</petals:protocol>
			<petals:service-name>soapbc</petals:service-name>
		</consumes>
	</services>
</jbi>
All fields are defined at the moment of project creation. The only important thing is to provide the proper interface-name, service-name and endpoint-name in the "consumes" section: these must match the ones defined in the SCA SU, so that the two can talk to each other. If you selected the SCA SU Eclipse project in the SOAP SU creation wizard, all these fields should already be correctly filled in.

Once this is done, you can build the project with a Maven install.

Create a JBI Service Assembly (SA) for deployment

Deployment into the JBI ESB is performed via Service Assemblies, which can contain many SUs, each one bound to a different component (BC, SE, etc.). In our case we have the SCA SU bound to the SCA SE component and the SOAP SU bound to the SOAP BC component. This is defined in the jbi.xml of the Service Assembly.

As before, we can use the PEtALS Eclipse plugins to create an empty SA Maven project ("New->Other->PEtALS->SA Maven Project"), called "sa-SCA-Calculator". The wizard will allow us to add the SUs we want into the SA, so no more configuration is needed.

Once created, as with the previous artifacts, we need to define the project-specific parameters in the pom.xml. Make sure you define a description.

The SA project defines the SUs to be included via standard Maven dependencies. That's the only configuration step to perform (done automatically by the wizard), and the jbi.xml will be created automatically. The pom.xml of the SA would then look like:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">

	<artifactId>sa-SCA-Calculator</artifactId>
	<description>A description of sa-SCA-Calculator</description>
	<version>1.0-SNAPSHOT</version>
	<packaging>jbi-service-assembly</packaging>

	<name>sa-SCA-Calculator</name>
	<groupId>com.tsl.sca</groupId>
	<modelVersion>4.0.0</modelVersion>

	<dependencies>
		<dependency>
			<artifactId>su-SCA-Calculator-provide</artifactId>
			<groupId>com.tsl.sca</groupId>
			<version>1.0-SNAPSHOT</version>
			<type>jbi-service-unit</type>
		</dependency>
		<dependency>
			<artifactId>su-SOAP-calculatorService-consume</artifactId>
			<groupId>com.tsl.sca</groupId>
			<version>1.0-SNAPSHOT</version>
			<type>jbi-service-unit</type>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.ow2.petals</groupId>
				<artifactId>maven-petals-plugin</artifactId>
				<extensions>true</extensions>
			</plugin>
		</plugins>
	</build>
</project>
Run a maven install to build the SA.

Deploy SA into PEtALS

The generated zip file, "sa-SCA-Calculator-1.0-SNAPSHOT.zip", just needs to be copied to the PETALS_HOME/install directory, and the SA will be automatically installed and started.
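For example, from the SA project directory after the build:

# Drop the built SA into the PEtALS auto-install directory
cp target/sa-SCA-Calculator-1.0-SNAPSHOT.zip $PETALS_HOME/install/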

The PEtALS console should show information regarding the compilation and creation of the required classes for the SCA and SOAP SUs:


SCA Calculator deployed in PEtALS

Test the Service

The only thing left is to test that everything works. For that I used SOAPUI, loading the WSDL from the SOAP WS services page (provided by the PEtALS component at http://localhost:8084/petals/services/CalculatorService?wsdl).


Testing SCA Calculator with SOAPUI

Future Posts

In future posts I will explore transparent deployment of SCA applications across a distributed ESB (such as PEtALS), and the use of external references to services using CORBA (e.g. replacing the add service with an external CORBA-based service).
The latter will also explore the use of third-party JBI components (such as JBI4Corba) in PEtALS, as there is no native PEtALS JBI BC component for CORBA.

The Server Labs @ Cloud Computing Expo 09 – Update

The presentation we gave last month at the Cloud Computing Expo in Prague is now available online below.

Update 29 Jun 2009: Amazon Web Services

Amazon has just published this blog entry about The Server Labs' proof of concept for ESA, Scaling to the Stars.