by Jay Judkowitz
This is the second in a series of four articles discussing infrastructure as a service (IaaS) clouds. The series began with basic level setting and now dives progressively deeper. The topics for the series are:
1. Cloud 101
- What is cloud
- What value should cloud provide
- Public, private, and hybrid cloud
- Starting on a cloud project
2. Application taxonomy, what belongs in the cloud, and why
3. What you should look for in cloud infrastructure software
4. Evaluating different approaches to cloud infrastructure software
Cloud is obviously a serious transformation for your datacenter, but that transformation does not need to be far off or futuristic. If you pick the right applications owned by the right users with the right needs and if you partner with the right cloud software provider, cloud and its many benefits are achievable today.
When first deploying a cloud, it is critical to choose the right applications to move to the cloud initially. Given that cloud is about allowing business units to manage their own computing needs, it follows that the ideal place for cloud is where the business units:
- Expect self-service.
- Are willing to fold provisioning logic into their day-to-day work.
- Have variable compute needs, so that their tasks frequently trigger provisioning and de-provisioning activity.
- Juggle multiple tasks, with different tasks acquiring and yielding compute resources over time.
- Run workloads that, even at peak, do not cause complex contention on shared resources, so that as much or as little can be deployed as necessary with little if any forethought or calculation.
If you break down the types of workloads in a datacenter, you get three major types:
- Traditional, monolithic, and stateful client/server apps – things like Exchange Servers and traditional databases
- Scale-out load balanced apps with disposable stateless instances
- Batch type computing jobs that can be decomposed into small chunks of compute and storage and distributed across a pool – things like Hadoop, Monte Carlo simulations, business analytics, and media processing and conversion
For each class of application, there are dev/test deployments and production deployments. This gives us a simple six-way classification of what runs in a datacenter that we can use to select our cloud candidates.
The following chart shows which workloads are good for early cloud adoption and what should be dealt with later on in the process. Beneath the chart is some explanation and justification for this assessment.
(Green represents near-term opportunity and red represents something that should be sent to the cloud later on)
Scale-out Load Balanced Apps with Disposable Instances
This type of application does not rely on any single software instance being able to grow to consume tremendous compute resources. Rather, it assumes that each instance can only do so much and that additional power comes from adding more instances. For this sort of application to work, data and other application state must be moved out of the application itself to another location. This allows instances to be created and destroyed with no data loss or outage for the end user. The worst consequence of an instance failure is that the end user must retry their operation; the retry finds a live instance and completes without difficulty.
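The retry behavior just described can be sketched in a few lines of Python. This is purely illustrative: the function names and the boolean "health" model of an instance pool are assumptions for the sake of the example, not any real load balancer's API.

```python
def handle_request(instance_healthy: bool, payload: str) -> str:
    """Simulate a stateless instance: all state lives outside the
    instance, so any healthy instance can serve any request."""
    if not instance_healthy:
        raise ConnectionError("instance gone")
    return f"processed:{payload}"

def call_with_retry(instances: list, payload: str) -> str:
    """Retry against the pool until a live instance answers.
    Because instances hold no state, a retry loses nothing."""
    for healthy in instances:
        try:
            return handle_request(healthy, payload)
        except ConnectionError:
            continue  # a dead instance only costs the user a retry
    raise RuntimeError("no live instances")
```

The point of the sketch is that instance failure never surfaces to the user as data loss, only as one transparent retry.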
Generally, the application requires more or fewer instances over time as load grows and shrinks. This demands dynamism in the deployment, with fast turnaround on provisioning operations. As load grows, the application must respond in a short period of time. Either the application administrator or some auto-scaling management system needs to be able to make the required changes without a ticket to the infrastructure team. When load is high, new instances must be spawned to handle the increased demand. When load drops, instances should be deleted to reclaim compute resources for more useful activity. Because the instances are stateless, no instance has an inherent reason to remain persistently deployed once there is insufficient load for it to service.
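The scale-up/scale-down decision an auto-scaler makes can be sketched as a simple threshold rule. The function name, the 60% utilization target, and the step-by-one policy below are illustrative assumptions, not any particular auto-scaler's behavior:

```python
def desired_instances(current: int, load_per_instance: float,
                      target: float = 0.6, minimum: int = 1) -> int:
    """Threshold-based scaling decision: grow when average utilization
    exceeds the target, shrink (never below the minimum) when it falls
    well under it. All names and thresholds here are illustrative."""
    if load_per_instance > target:
        return current + 1          # spawn an instance to absorb load
    if load_per_instance < target / 2 and current > minimum:
        return current - 1          # reclaim an idle instance
    return current
```

A management system would run this decision on a loop against live metrics; the key property is that no infrastructure-team ticket sits between the measurement and the provisioning action.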
Scale-out applications are also a great fit with the datacenter operations models required to manage a massive cloud. Cloud scale datacenters need to assume that any piece of equipment can break at any time with the application being resilient and able to hide the failure from the end user. Scale-out applications accomplish this by spreading many instances across nodes, datacenters, or even geographies. This makes it much simpler for the infrastructure to be managed – failures become a capacity issue which can be managed in aggregate on a periodic basis, not end-user outage issues that need to be addressed immediately and individually.
More generally, this type of application is the way of the future. Given current systems architectures, we are seeing more, cheaper cores distributed across many commodity nodes rather than massive scaling up of individual servers. The only way to really make applications scale is to build them from many smaller instances working together. Some programming infrastructures take this to its logical conclusion. Node.js, a JavaScript runtime increasingly used for scale-out web applications, runs application code on a single core regardless of how many cores are in the system. Developers who need more power must run more Node.js application instances.
Scale-out applications and cloud are such a good match because they share the same goals: elasticity, scaling up and down in response to load while using only the resources actually needed; distribution of applications across hosts and sites to tolerate systems outages without end-user impact; and better conformance to modern system architecture.
Batch Applications
Batch applications are decomposable into smaller packages of compute and storage. They are good cloud candidates in both dev/test and production, for reasons similar to the scale-out applications. As jobs are launched, the number of instances depends on how finely you can chunk the job. Since separate runs use data sets of different sizes, the number of instances needed will vary from run to run, so statically provisioning instances makes no sense. Furthermore, such an environment is likely to host completely different applications that need to run at different times. Clouds are perfect for repurposing an infrastructure, ramping up one application while ramping down another without a massive retooling of the physical infrastructure. As in the previous case, infrastructure administrators should not decide how many instances to deploy or when; activity should be driven by the business that actually runs the jobs and derives benefit from the output.
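As a rough illustration of this kind of decomposition, the sketch below splits a Monte Carlo estimate of pi into independent chunks and fans them out across a worker pool. The names are invented for the example, and a local thread pool stands in for what would, in a cloud, be a fleet of short-lived instances:

```python
from concurrent.futures import ThreadPoolExecutor
import random

def simulate_chunk(seed: int, samples: int) -> int:
    """One independent chunk of a Monte Carlo job: count random points
    that land inside the unit quarter-circle."""
    rng = random.Random(seed)
    return sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def run_job(total_samples: int, chunks: int) -> float:
    """Split the work into independent chunks and distribute them
    across a pool; a bigger data set simply means more chunks, not
    bigger nodes. In a cloud, each chunk could run on its own
    short-lived instance instead of a local thread."""
    per_chunk = total_samples // chunks
    with ThreadPoolExecutor(max_workers=chunks) as pool:
        hits = pool.map(simulate_chunk, range(chunks),
                        [per_chunk] * chunks)
    return 4.0 * sum(hits) / (per_chunk * chunks)
```

Because the chunks share nothing, the pool can be grown for a large run and torn down afterward, which is exactly the repurposing pattern described above.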
Traditional IT Applications
Traditional IT applications, vertically scaled, stateful, and monolithic, are not good cloud candidates in their production deployments. These applications tend to be custom built with a particular load in mind, deployed once, expected to keep each instance running persistently, and touched only for upgrades. These types of applications are not a natural fit for the cloud paradigm. They do not accommodate dynamic scaling by simply adding more instances, so there is less need for end-user self-service. Also, because they are monolithic, any performance or availability problem must be addressed within the one or two instances that make up the application, with a deep understanding of the impact of the underlying infrastructure. As a result, the burden of management stays with the infrastructure administrator and cannot shift to the application administrator. This breaks the operational model of cloud, in which the datacenter administrator must be removed from the details of running applications.
However, traditional IT applications are very well suited to cloud when the service is development, integration, and testing of those applications. For this function, application owners need to deploy new instances of server software, update them, make new templates, try out their work, and iterate. Furthermore, large numbers of client instances need to be deployed for load and scale testing. The development and testing process generates both the dynamism in workload resource needs and the requirement for self-service on the part of the IT developer that make a cloud an ideal solution.
Pick the Right Applications and Move to Cloud Today!
Too many times cloud is pitched as an evolutionary technology. In many cases, this is because the vendor making this pitch is already managing a legacy application stack for the customer and sees no reason for a radical shift.
Since these legacy applications do not accommodate elasticity and do not tolerate the unpredictable availability of any single server that the cloud datacenter operations model implies, true clouds can deliver only limited benefits to them, and moving them causes SLA losses that are unacceptable to their end users.
Of course, legacy applications will not go away any time soon and we acknowledge that it takes tremendous time and effort to move to a new programming paradigm. But, the technology is here today and the benefits have been made obvious – scale, resiliency, efficiency. The success stories of companies like Netflix and Zynga are well known. All that is needed is the will to move in that direction.
For enterprises and service providers that leverage modern application development processes, cloud is not an evolution at all – cloud is the best and most obvious way forward for development, testing, and mass deployment of their applications.
Pick your target applications and get started today!