The Server Labs Blog


Eating our own Dog Food! – The Server Labs moves its Lab to the Cloud!


After all these years dealing with servers, switches, routers and virtualisation technologies, we think it's time to move our lab into the next phase: the Cloud, specifically the Amazon EC2 Cloud.

We are already actively working in the Cloud on different projects, as you've seen in previous blog posts. We believe this step is not only a natural one, but also takes us in the right direction: towards more effective management of resources and higher business agility. This fits the needs of a company like ours, and we believe it will also fit many others of different sizes and requirements.
Cloud computing is not just a niche for special projects with very specific needs. It can be used by ordinary companies to run a more cost-effective IT infrastructure, at least in certain areas.

In our lab we had a mixture of server configurations, comprising Sun and Dell servers running all kinds of OSs, using VMware and Sun virtualisation technology. The purpose of our Lab is to provide an infrastructure for our staff, partners and customers to perform specific tests, prototypes, PoCs, etc. The Lab is also our R&D resource for creating new architecture solutions.

Moving our Lab to the cloud will provide an infrastructure that is more flexible, manageable, powerful and simple, and definitely more elastic to set up, use and maintain, without losing any of the features we currently have. It will also allow us to concentrate more on this new paradigm, creating advanced cloud architectures and increasing our overall know-how, which can be injected back into customers and the community.

In order to commence this small project, the first thing to do was a small feasibility study to identify the technologies to use inside the cloud, primarily to maintain confidentiality and secure access, but also to properly manage and monitor that infrastructure. Additionally, one of the main drivers of this activity was to reduce our monthly hosting cost, so we needed to calculate, based on current usage, the savings of moving to the cloud.

Cost Study

Looking at the cost of moving to the cloud, we performed an inventory of the required CPU power, server instances, storage (for both Amazon S3 and EBS) and the estimated data I/O. Additionally, we estimated the volume of data transferred between nodes and between Amazon and the external world.

We initially planned to automatically shut down and bring up the servers that are only needed during working hours, to save more money. In the end, we will be using Amazon reserved instances, which give an even lower per-hour price, similar to what we would have achieved by shutting down on-demand servers.

Based on this inventory and these estimations, and with the help of the Amazon Cost Calculator, we reached a final monthly cost that was approximately one third of our hosting bill!
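To make the reserved-versus-on-demand trade-off concrete, here is a back-of-the-envelope sketch. The rates used ($0.10/hour on-demand, $227.50 one-year upfront plus $0.03/hour reserved) are illustrative assumptions for a small instance, not our actual bill:

```shell
# Illustrative arithmetic only -- hypothetical per-hour rates.
HOURS=720   # roughly one month of 24x7 uptime
# On-demand at an assumed $0.10/hour:
ON_DEMAND=$(awk -v h="$HOURS" 'BEGIN { printf "%.2f", h * 0.10 }')
# Reserved: an assumed $227.50 one-year upfront fee amortised monthly,
# plus an assumed $0.03/hour usage charge:
RESERVED=$(awk -v h="$HOURS" 'BEGIN { printf "%.2f", 227.50 / 12 + h * 0.03 }')
echo "on-demand: \$${ON_DEMAND}/month"
echo "reserved:  \$${RESERVED}/month"
```

Even running 24x7, the reserved price comes in well under the on-demand one, which is why we settled on reserved instances rather than scripted shutdowns.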

This cost considers purely the physical infrastructure. On top of it we need to add the savings on hardware renewal, pure system administration and system installation. Even using virtualisation technologies, we have sometimes had to rearrange things because our physical infrastructure was limited. All these extra costs mean savings in the cloud.

Feasibility Study

Moving to the cloud gives most IT managers the feeling that they lose control and, most importantly, control of their data. While the use of hybrid clouds can keep the data under your control, in our case we wanted to move everything to the cloud. We are certainly no different, and we are quite paranoid about our data and how it would be stored in Amazon EC2. We also require secure network communication between our nodes in the Amazon network, and the ability to give secure external access to our staff and customers.

A set of open-source technologies has helped us materialize these requirements into a solution that we feel comfortable with:

  • Filesystem encryption for securing data storage in Amazon EBS.
  • A private network and IP range for all nodes.
  • Cloud-wide encrypted communication between nodes within the private IP network range, via an OpenVPN solution.
  • An IPSec VPN solution for permanent external access to the Cloud Lab, for instance to connect a private cloud/network to the public EC2 Cloud.
  • RightScale to manage and automate the infrastructure.
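As an illustration of the first item, filesystem encryption on EBS can be built from standard Linux tools. This is a minimal sketch assuming dm-crypt/LUKS; the device name /dev/sdf and the mount point are chosen purely for the example, not our exact setup:

```shell
# Hypothetical example: encrypting an attached EBS device with LUKS/dm-crypt.
cryptsetup luksFormat /dev/sdf             # initialise the encrypted volume (asks for a passphrase)
cryptsetup luksOpen /dev/sdf securedata    # map it to /dev/mapper/securedata
mkfs.ext3 /dev/mapper/securedata           # create a filesystem on the mapping
mount /dev/mapper/securedata /mnt/secure   # data now rests encrypted on EBS
```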
Overview of TSL Secure Cloud deployment


Implementation and Migration

The implementation of our Cloud Lab solution has gone very smoothly and it is working perfectly.
One of the beneficial side effects of migrating different systems into the cloud is that it forces you to be much more organised, as the infrastructure is very focused on reuse and automatic recreation of the different servers.

We have standardized all our Lab images, taking prebuilt images available in Amazon and customising them to include the security hardening, standard services and conventions we have defined. In a matter of seconds we can deploy new images and include them in our secure, VPN-based Cloud Lab network, ready to be used.

Our new Cloud Lab is giving us a very stable, cost-effective, elastic and secure infrastructure, which can be rebuilt in minutes using EBS snapshots.

Java HelloWorld @ the Cloud with Amazon EC2

This post is about how to run a simple Java application installed in one of your Amazon AMIs.

Thanks to the GridGain guys, who inspired me when I read this page about running GridGain in Amazon EC2.

I am going to assume that you know how to create an Amazon AMI. For information on how to create one, please look here.

In my scenario I took one of the Ubuntu 8.04 images and created a helloworld user, as I do not want to run the process as root.

Once you have logged in to your running instance, follow these steps.

Step 1

I assume you have created your helloworld user.

su - helloworld

Step 2
Create your file HelloWorld.java:

vi HelloWorld.java

Then type the favourite Java source code ever.

public class HelloWorld {

    public static void main(String[] args) {
        System.out.println("[Data passed to the AMI instance:] " + System.getProperty("userData"));
    }
}

You can see in the source code that I am reading a system property userData. This is because we are going to send that data to the running EC2 instance at boot time.

Then compile the code:

javac HelloWorld.java

And test it. You should get null as output, because we have not defined the property -DuserData:

java HelloWorld

Step 3
Create your personalized AMI

If you want to run a Java process at boot time, open /etc/rc.local before creating the AMI and add at the end the command that executes your Java process:

echo "Running java process " > /tmp/rc.local.log
su - helloworld /home/helloworld/ 

The script might look like this:

export JAVA_HOME=${HOME}/software/jdk1.6.0_14
export PATH=${JAVA_HOME}/bin:${PATH}
export LOG_FILE=${HOME}/helloworld.log
export USER_DATA=`GET http://169.254.169.254/latest/user-data`
echo ">>> [USER_DATA] >>> "${USER_DATA}
java ${USER_DATA} HelloWorld >> ${LOG_FILE} 2>&1

As you can see, I am calling an Amazon web service to get the data I passed to the AMI. That data is a string with the JVM args, so what I am going to pass is the string “-DuserData=Amazon_says_Hello_World”, passing the user data to the JVM as arguments.

Step 4
Once you have your AMI ready and registered, you only have to run it.

ec2-run-instances ${MY_AMI} -n ${NODES} -K ${EC2_PRIVATE_KEY} -C ${EC2_CERT} -g ${MY_SECURITY_GROUP} -z ${ZONE}  -t ${INSTANCE_TYPE} -d "-DuserData=\"Amazon_says_Hello_World\""

This line runs the instance in the Amazon Cloud. If you log in to the running instance and change user from root to helloworld, you can check the log file to see that HelloWorld was executed. You can also run it manually, just by executing the script you created before.

You can even query the Amazon web service yourself. You only have to type:

GET http://169.254.169.254/latest/user-data

You should see at the prompt “-DuserData=Amazon_says_Hello_World”.
If you run the HelloWorld script you will see “[Data passed to the AMI instance:] Amazon_says_Hello_World”.

Amazon also allows you to pass files to ec2-run-instances, so you could implement many other ways of passing data or configuration arguments to your processes.

If you have Subversion installed, you could even check out the latest configuration file for your process, or FTP it from a server, etc. The way you do it is up to you.

With this approach of passing a string you can face some problems. For instance, if you pass several JVM arguments, you cannot put a white space between them, because the ec2-run-instances command thinks they are parameters for it. The workaround, if you still want to pass the data as a string, is to use the ${IFS} variable, which by default is the white space. Then, in the script that runs the HelloWorld class, after calling the web service for the data, you have to add an extra line:

USER_DATA=`eval "echo ${USER_DATA}"`

This makes the ${IFS} be evaluated, but this approach is a bit tricky and weird 😉
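The trick can be seen in a plain shell session, independent of EC2; the second argument, -Dcolor=blue, is made up for the demo:

```shell
# Join two JVM arguments with the literal text '${IFS}' instead of a space,
# as you would when passing them through -d to ec2-run-instances:
USER_DATA='-DuserData=Amazon_says_Hello_World${IFS}-Dcolor=blue'
echo "raw:      ${USER_DATA}"        # still contains the literal ${IFS}
# eval makes the shell substitute ${IFS} (whitespace by default):
USER_DATA=`eval "echo ${USER_DATA}"`
echo "expanded: ${USER_DATA}"        # now two separate arguments
```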

The Server Labs @ the Cloud Computing Expo Europe 09

We are glad to announce that we are going to publish a paper and give a talk at the Cloud Computing Expo Europe.

Paper Title: Cloud Science: Astrometric Processing in Amazon EC2/S3

Paper Abstract:

With the maturing of cloud computing, it is now feasible to run scientific applications in the cloud. Data storage and high-performance computing resources are fundamental for scientific applications. Outsourcing these services provides scalability, flexibility and high availability at lower prices compared with traditional in-house data processing. This article evaluates the suitability of Amazon EC2/S3 for this scenario by running a distributed astrometric process, developed for the European Space Agency's Gaia mission, in Amazon EC2. The aim is to demonstrate how cloud computing systems can be a cost-effective solution for HPC applications.

We hope to see you there. Do not hesitate to contact us!

Amazon releases EBS, Persistent Storage for EC2

Last week Amazon announced the release of Elastic Block Store (EBS), a block based persistent storage mechanism for EC2. This is very exciting news that will make a huge impact on the adoption of cloud computing and virtualisation in general.

I’m not going to go into a huge amount of detail here; if you want the full details, I suggest you check out the blog entries from RightScale or from Amazon’s own CTO, Werner Vogels.

EBS Volumes

Before EBS, any data you had was lost when you powered down the machine, unless you had backed it up to S3. Now, with EBS, the volumes you mount are persistent:

Amazon EBS volumes are created in a particular Availability Zone and can be from 1 GB to 1 TB in size. Once a volume is created, it can be attached to any Amazon EC2 instance in the same Availability Zone. Once attached, it will appear as a mounted device similar to any hard drive or other block device.
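The quoted workflow maps onto the EC2 API command-line tools roughly as follows; the volume, instance and device identifiers are placeholders for illustration:

```shell
ec2-create-volume --size 10 -z us-east-1a                # a 10 GB volume in one zone
ec2-attach-volume vol-12345678 -i i-12345678 -d /dev/sdh # attach it to a running instance
# then, on the instance itself:
mkfs.ext3 /dev/sdh      # format the new block device
mount /dev/sdh /data    # and mount it like any local disk
```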


Although EBS volumes are quite reliable, to achieve full reliability you should back up your data to S3, which is done via the mechanism of EBS Snapshots. An interesting feature of these Snapshots is that they are incremental: only the blocks that have changed since the last Snapshot are written to S3.
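Taking a Snapshot, and later seeding a fresh volume from it, is a one-liner each with the same command-line tools (the IDs are again placeholders):

```shell
ec2-create-snapshot vol-12345678                       # incremental snapshot stored in S3
ec2-create-volume --snapshot snap-12345678 -z us-east-1a  # a new volume restored from it
```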


Currently EBS is priced at $0.10 per allocated GB per month. Amazon also charges $0.10 per 1 million I/O requests made to your volume, so you should be careful how you use your volumes.

It should be mentioned that you pay for what you allocate, not what you use: if you allocate 1 TB straight away, you will pay a lot at the end of the month even if you never use it.
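A quick back-of-the-envelope using the rates quoted above shows why allocation matters; the 100-million-request figure is just an example workload:

```shell
# $0.10 per allocated GB-month: a 1 TB (1024 GB) volume costs the same
# whether you fill it or never write a single byte.
STORAGE=$(awk 'BEGIN { printf "%.2f", 1024 * 0.10 }')
echo "storage: \$${STORAGE}/month"
# Plus $0.10 per million I/O requests, e.g. 100 million requests:
IO=$(awk 'BEGIN { printf "%.2f", 100 * 0.10 }')
echo "io:      \$${IO}/month"
```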

What this means for computing

I believe that the release of Elastic Block Storage is going to make a huge impact on IT. Technology Startups and established businesses will now be able to test out their new ideas without having to fork out a lot for expensive equipment. And with the EC2 model you can even shut your instances down at night to further save on costs. We are entering a very exciting time for computing.

Looking further

If you want to play with EBS, Eric Hammond has written an excellent article describing how to run MySQL on Amazon EC2 with Elastic Block Store.