The Server Labs BizBlog

The Server Labs @ EAE Business School

A Walk in the Clouds

We were recently invited by the EAE Business School to give a talk in a live webinar about cloud computing. With an audience of around 60 people from around the world, the webinar forms part of a series of presentations focusing on the technologies and business models that coexist on the Internet.

The webinar – which we shared with Juan Miguel Gómez, PhD in Computer Science from the National University of Ireland and EAE professor – addressed questions such as: What is cloud computing? What are the key benefits for users, companies and employees? What strategies exist for cloud computing adoption? What is the future of cloud computing? What real cases have we worked on?

Following the talk, the questions during the Q&A session focused predominantly on security issues, how far behind Amazon its competitors are, and the right strategy for cloud adoption, amongst others.

We would like to invite you to watch the complete webinar at the following link:

Cloud computing webinar video

Learning from Amazon's failure

At the end of April, Amazon Web Services (AWS), the world leader in cloud computing Infrastructure as a Service, suffered a major outage affecting hundreds of websites that rely on AWS.

When Amazon went down, it took with it many companies that run their web services in Amazon's data centre in Northern Virginia, in the US East region. Companies such as reddit, Indaba Music, Foursquare and Quora were affected by this major outage. This was a concerning situation for Amazon.

With cloud computing gaining more traction, enterprises are increasingly deploying their services on AWS due to the benefits the cloud provides in terms of cost savings, elasticity and faster times to market. But this AWS outage has raised lots of questions and doubts in the minds of current customers, as well as potential cloud users, regarding the reliability of the cloud.

What lessons can we learn from this failure? Is the cloud something we can rely on? The answer is simple:

“yes we can!”

Other well-known and successful customers, such as Netflix, also use AWS to offer their services, but they were not affected by the outage. Why? Either because they were lucky enough to run their services in another Amazon region, or because they designed their systems to deal with failures and provide business continuity.

Enterprises should design their systems to be robust and resilient to failure. That would allow them to keep their business up and running when such failures happen. The nice thing is that AWS allows you to design fault-tolerant architectures.

Moving an application to the cloud is not trivial: it does not merely mean relocating the service and relying on the availability of the cloud service provider. Such an approach is fine for services that are neither "core business" nor "mission critical", or for companies whose services can be down for several hours, probably losing some data, without compromising their business.

Many services are being moved to the cloud in order to obtain the benefits the cloud provides nowadays: high availability; the ability to scale (orders-of-magnitude increases in usage and/or users); performance; faster deployment times, etc. Although Amazon is responsible for the interdependencies between Availability Zones, many companies failed to recognise that the other part of the problem lies in the architectures they deploy in the cloud.

At The Server Labs we think that the key to a successful architecture in the cloud is 'Design for Failure', just as you would for any other distributed system. Although the outage left companies out of business for hours and hurt their P&L, many still do not follow that golden rule. The reasons are probably one of these: either they lack the technical knowledge in complex architectures, distributed systems and cloud computing architectures, or they cannot afford the operational costs associated with a global high-availability architecture on Amazon AWS.

There are several approaches to architecting high-availability systems on Amazon's cloud.

Use multiple Availability Zones

This approach runs the business services across multiple Availability Zones on AWS (such as US West 1a and US West 1b). A failure in one zone redirects traffic to a different, stable zone. This is a cost-effective solution (in comparison to the second approach of distributing business services across multiple regions). However, it may not be sufficient when a failure affects several Availability Zones in the same region, as happened in US East during the recent outage.
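The failover logic in this approach can be sketched in a few lines of Python; the zone names and health states below are hypothetical stand-ins for what a load balancer such as Amazon ELB does on your behalf:

```python
# Minimal sketch of health-check-based failover across Availability Zones.
# The zone names and health states are invented for illustration; on AWS,
# a load balancer performs this routing decision for you.

def pick_healthy_zone(zones, is_healthy):
    """Return the first zone that passes its health check, or None."""
    for zone in zones:
        if is_healthy(zone):
            return zone
    return None

zones = ["us-west-1a", "us-west-1b"]
health = {"us-west-1a": False, "us-west-1b": True}  # simulate a zone outage

target = pick_healthy_zone(zones, lambda z: health[z])
print(f"Routing traffic to: {target}")  # Routing traffic to: us-west-1b
```

If every zone in the region fails the health check, `pick_healthy_zone` returns `None` and no local failover is possible, which is exactly the scenario the multi-region approach below addresses.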

Use multiple Availability Regions

The second approach is to run the application across multiple regions. In this case the service is hosted in multiple AWS regions (such as US West and Europe). With this setup it is possible to have geo-distributed traffic and high availability across continents. This configuration is recommended for companies with a high level of scalability, load balancing and worldwide user access requirements. In the case of a failure in one region, traffic can be redirected to other, stable regions. This approach would have withstood the latest AWS outage, and is the one used by several companies that were not affected by it.
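The region-selection logic can be sketched as follows; the region names and latency figures are purely illustrative, and in practice a geo-DNS service (such as Amazon Route 53) makes this decision per client:

```python
# Sketch of latency-based routing with regional failover.
# All region names, latencies and health states are illustrative only.

def choose_region(latency_ms, healthy):
    """Pick the lowest-latency healthy region, or None if all are down."""
    candidates = [(ms, region) for region, ms in latency_ms.items()
                  if healthy[region]]
    return min(candidates)[1] if candidates else None

latency_ms = {"us-east-1": 40, "us-west-1": 90, "eu-west-1": 140}
healthy = {"us-east-1": False, "us-west-1": True, "eu-west-1": True}

# The nearest region (us-east-1) is down, so traffic falls back to us-west-1.
print(choose_region(latency_ms, healthy))  # us-west-1
```

Note that the client pays a latency penalty during failover, but the service stays up; that is the essence of trading cost and performance for availability.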

Both approaches offer a simple view of the possibilities for designing fault-tolerant cloud architectures. Of course, the design could be extended (involving multiple public clouds or hybrid solutions) depending on the business needs.

As with any distributed system, cloud architects should bear in mind some key points. As a rule of thumb: "Avoid single points of failure unless your business can live with them." The architecture should not compromise scalability or availability, and any fail-over mechanism will result in additional costs.

In the end, high availability and scalability is a trade-off: the higher cost of infrastructure versus the benefit of not losing customers and revenue in case of a failure.
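The trade-off can be made concrete with a back-of-the-envelope calculation; every figure below is invented purely for illustration:

```python
# Back-of-the-envelope HA trade-off. Every number here is an assumption
# invented for illustration, not a real price or outage statistic.

def expected_outage_loss(outage_hours_per_year, revenue_per_hour):
    """Expected yearly revenue lost to downtime."""
    return outage_hours_per_year * revenue_per_hour

extra_ha_cost_per_year = 50_000  # assumed extra cost of a multi-region setup
outage_hours_per_year = 24       # assumed downtime without HA
revenue_per_hour = 4_000         # assumed revenue at risk per hour of downtime

loss = expected_outage_loss(outage_hours_per_year, revenue_per_hour)
print(f"Expected loss without HA: ${loss:,}")  # Expected loss without HA: $96,000
print("HA pays for itself" if loss > extra_ha_cost_per_year else "HA costs more")
```

With these (invented) numbers the fail-over investment pays for itself; a business with lower revenue at risk might reasonably reach the opposite conclusion, which is why this remains a business decision rather than a purely technical one.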

The AWS outage will force enterprises to focus on the importance of robust, well-defined and well-designed architectures for deployment in the cloud. If enterprise cloud architects consider and address the many possible modes of failure, nothing will ever fail completely. With this approach in mind, those companies will get the most benefit from the cloud.

The Server Labs @ The European Ground System Architecture Workshop (ESAW) 2011

We recently participated in The European Ground System Architecture Workshop (ESAW) 2011 that was held at ESOC, Darmstadt, Germany, on 10 and 11 May 2011.

With nearly 300 ground systems architects and experts from European and American space agencies, telecommunication operators, satellite primes, European institutes and universities, European industry, and companies from the USA, Canada and many other countries around the world, the workshop was a great success and a platform for the open exchange of ideas and concepts. Topics included ground segment architecture (MCS, FDS, MPS, back-end, EGSE, etc.), cloud, Service Oriented Architectures (SOA) and message-oriented architectures, modelling, ground software systems (language trends, commercial off-the-shelf and open-source components, operational software, software reuse, emerging ground system technologies and harmonisation), data system harmonisation, security and information assurance, automation and integrated services, interoperability and standards, and Intellectual Property Rights, licensing and third-party rights management.

Cost reduction was one of the key issues throughout the conference, together with the general perception that the "good days" have gone. The message was clear: do more and better with less money. The cost reductions affect not only science missions, but also commercial projects, which need to be profitable. As a result, every system architecture, technology and trend presented aligned its benefits with bringing down costs in software development, management, maintainability, etc.

The common idea among all the software cost-saving proposals was the use of a common core (EGS-CC), so that all future missions would share costs and save money. This is key, since there are synergies among all the missions.

We at The Server Labs once more participated actively in the workshop, contributing both a technical presentation and a large-scale poster explaining the application of cloud computing to ground system architectures. The use cases we presented in the poster generated a lot of interest, as did the outline of how cloud computing could help ground system architectures obtain faster delivery of services, increased service elasticity, self-service provisioning and management, self-management and automatic scaling, and the associated cost reduction from the resource user's perspective, thanks to a pay-per-use billing model.

We also delivered a presentation on SOA4GDS, "Evaluating the suitability of emerging service based technologies in ground data systems". The SOA4GDS project is a study for the ESA Basic Technology Research Programme (TRP), conducted jointly by The Server Labs and VEGA, to evaluate the suitability of emerging service-based technologies, such as SOA, for the technical requirements of ESA/ESOC's ground data systems.

We definitely think that the workshop was a great success and look forward to participating in the next one.

See you again in 2013!

Amazon HPC Cluster Compute instances make the Top500 supercomputer list

The November issue of the Top500 supercomputer list has just been released, and there is a new entry at number 231.

Rank: 231
Site: Amazon Web Services, United States
Computer/Year: Amazon EC2 Cluster Compute Instances – Amazon EC2 Cluster, Xeon X5570 2.95 GHz, 10G Ethernet / 2010
Cores: 7040    Rmax: 41.82 TFlop/s    Rpeak: 82.51 TFlop/s

The AWS EC2 Cluster Compute Instance is now officially a supercomputer! AWS's cluster compute instances are based on dual quad-core Nehalem chips and have a 10 Gbit Ethernet interconnect.

The table above quotes 7040 cores, which means 880 cluster compute nodes were used for the test (7040 cores ÷ 8 cores per node).

This means that any company requiring supercomputing processing power can afford to rent the entire cluster of 880 nodes from Amazon for less than $1,500 per hour. All they require is a credit card.
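The figures can be checked with simple arithmetic; the $1.60 hourly rate below is our assumption of the on-demand price of a Cluster Compute instance at the time of the November 2010 list:

```python
# Back out the cluster size and hourly rental cost from the Top500 entry.
# The $1.60/hour on-demand price is an assumption based on AWS pricing
# at the time; check the current price list before relying on it.

total_cores = 7040
cores_per_node = 8           # two quad-core Xeon X5570 CPUs per instance
price_per_node_hour = 1.60   # USD, assumed on-demand rate

nodes = total_cores // cores_per_node
hourly_cost = nodes * price_per_node_hour
print(nodes, f"${hourly_cost:,.2f}")  # 880 $1,408.00
```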

The implications for this are huge. It opens up supercomputing to the masses.

Amazon launch GPU processing

Today has been quite a day of news from the Amazon camp. They have also announced the immediate availability of Cluster GPU instances for HPC processing. A lot of our customers have expressed an interest in trying out their algorithms on GPUs, and now they can do so without having to make a huge investment. The new instance type is based on the NVIDIA Fermi architecture as well as "normal" dual quad-core Nehalem CPUs.

Cluster GPU Quadruple Extra Large Instance
22 GB of memory
33.5 EC2 Compute Units (2 x Intel Xeon X5570, quad-core “Nehalem” architecture)
2 x NVIDIA Tesla “Fermi” M2050 GPUs
1690 GB of instance storage
64-bit platform
I/O Performance: Very High (10 Gigabit Ethernet)
API name: cg1.4xlarge

They are currently only available in us-east-1, the Northern Virginia region.

In the near future we will be writing an entry on our technical blog going into the technical details.

IDC Conference on Cloud Computing – Madrid

May 27th, 2010 by Alfonso Olias

2010 might be the turning point for cloud computing, especially in Spain. The current economic situation is pushing both SMEs and large corporations to take a closer look at their businesses in order to become more competitive, maximizing their investments and reducing their expenditures. The optimization of their free cash flow is a must. As their technology systems become obsolete, companies would need to invest heavily in IT (to purchase new infrastructure, licences, etc.). The reality, though, is that companies, in an attempt to reduce their expenses, decide to keep their current infrastructure, increasing their support costs and probably losing competitiveness. It is here that the cloud addresses the issues brought on by the crisis. Organizations need real solutions to solve their problems and help improve their competitiveness.

On the 27th of May we had the opportunity to attend the IDC Conference on Cloud Computing, where almost all the big players of the cloud met to share their visions and strategies.

One of the presentations that best illustrated the advantages of the cloud was given by Codere, who outsourced their IT to an external virtual private cloud infrastructure whilst keeping control of the management. Several years ago they started evaluating virtualization and consolidation technologies in order to maximize their IT CapEx. After spending too much money and effort in an attempt to build their own private cloud, they realized that they needed to refocus on their core business. Having assessed the current cloud offerings, they moved to the BT cloud (BT enables the creation of a virtual data centre in 4 hours, and you can even set service levels 1 and 2). As a result, Codere made savings of 40% compared to their in-house IT infrastructure, with payback in less than 14 months.

Cloud vendors such as IBM and HP presented their solutions as a mix between cloud and the traditional outsourcing model. With large corporations as their target customers, they are offering virtual private clouds. The only difference was the new buzzword introduced by HP: TaaS, Test as a Service.

At The Server Labs we have been using the cloud for testing for well over a year, with very good results. It is a good way to assess your services without incurring CapEx, and with shorter testing times. Just think how long it takes to set up a testing environment in-house, when public cloud vendors can provide IaaS within minutes.

Microsoft’s cloud strategy has focused on Microsoft Azure, targeting both big companies and SMEs. Apart from offering IaaS, they are also moving into SaaS with their email service and office suite.

Google presented their cloud services portfolio (SaaS with Gmail etc., and PaaS with Google App Engine). For the time being, they have not moved into IaaS to compete with Amazon or Microsoft, but who knows what Google's next step in the cloud will be. Many traditional companies in Spain are moving to Google Apps. The government of Extremadura presented how they are using the cloud with Google Apps: for example, they offer online services such as "CV reviewing", where job seekers and public employees meet in the cloud through Google Docs to review a curriculum online and in real time, saving a lot of time and avoiding unnecessary queues. That is productivity! According to Forrester, companies can achieve returns of around 229% with Google Apps in comparison with traditional on-premise email solutions.

Legacy COBOL applications can be moved to the cloud thanks to the efforts of Micro Focus, allowing companies with legacy applications to run them on Amazon or Microsoft Azure. This might benefit banks, as they will not have to upgrade their mainframes.

Other companies, such as Lilly, are using the Amazon EC2/S3 public cloud for High Performance Computing (HPC), as we do at the European Space Agency. We had the opportunity to see a real demo, which is something to appreciate, because only BT and Terremark showed us demos of their services. Lilly benefits from the cloud the same way we do: running experiments on demand without having to wait weeks or even months for access to a ready-to-use infrastructure in an in-house data centre.

Another interesting point at the conference was a round-table discussion about the threats and potential show-stoppers for cloud adoption. The main concerns that came up were cloud vendor lock-in, software licensing, data ownership, data location, the LOPD (Spanish regulations on the protection of customer data), security, disaster recovery, etc. Before moving to a public cloud you need to understand these concerns and their implications for your core business and, of course, consult a lawyer before you move data into the cloud.

In conclusion, we believe that though there are still many barriers and challenges to overcome on both the supply and demand sides, the cloud is mature enough to provide real advantages to any company using it. Private clouds will make sense in the mid-term, or for as long as companies have to amortize their IT infrastructure. But there is no doubt that public clouds will become the standard in the long run. One simply cannot match the economies of scale the big vendors have, and building a private cloud requires a substantial investment in IT (CapEx), in addition to software licences, hardware, electricity, cooling, sysadmins, etc.

We will talk more about this in future posts.

Cloud Computing Expo 2010 – New York City

Three full days of presentations with 07:30 starts and 19:00 finishes flew by surprisingly fast. Whether I was hearing about a new technology solution or a new business model based on private, public or hybrid clouds, it was all interesting. There are a lot of companies like The Server Labs looking to grow with the cloud, and a lot of IT people like you wondering what the cloud is and how they can put it to good use. The vendors did a decent job of educating the audience while explaining their point of view and ultimately pitching their products. At least they weren't listing features and giving demos during the presentations; you had to go to the expo floor for that, and the iPad raffles of course.

Around 70 vendors, large and small, came from all over, with solutions ranging from public and private clouds to security, monitoring and virtualization management software. It's obvious that no one vendor has a complete soup-to-nuts cloud solution to fit every situation, and that you have a lot of work to do just to understand what you need. The good news is that system integrators (like us) that are ahead of the curve are in a position to help you cut through the haze and get straight to the set of technologies that is right for your specific situation.

There is still an enormous amount of confusion over what exactly “The Cloud” is and this will continue for the foreseeable future. What is clear is that a handful of early adopters are reaping huge benefits because they had a specific use case that was perfect for the cloud and they took the risk to capitalise on it.

Amazon Web Services Extends Global Coverage

With the latest addition of their first Asia-Pacific region, based in Singapore, Amazon Web Services connects global businesses with their customers and partners in Asia, as well as providing powerful compute services to Asian businesses.

This exciting announcement is more proof that cloud computing is gaining traction worldwide and helping reduce the cost of doing business globally.

See full press release:

Welcome to The Server Labs Biz Blog

The Server Labs has long been known for its technical prowess, clearly demonstrated by our customer successes and the excellent technical blog entries from our talented consultants. However, as IT continues to evolve and becomes a more prominent part of corporate culture, we realised that we also need to address IT architecture and cloud topics from a less technical but more business-relevant angle.

The purpose of our new Biz Blog is to help share our observations and opinions as to how technology advances and adoption are changing the way IT looks and feels. Don’t worry, the techie blog continues to live on and provide excellent reference materials for solving tricky problems.

We invite you to participate and add comments and questions to our blog entries.