Monthly Archives: June 2010

The End of Over-Provisioning

One part of the debate on cloudonomics that often gets overlooked is the effect of over-provisioning. Many people look at the numbers and say they can run a server for less money than they can buy the same capacity in the cloud. And, assuming that you optimize the utilization of that server, that may be true for some of us. But that's a very big and risky assumption.

People are optimists – well, at least most of us are. We naturally believe that the application we spend our valuable time creating and perfecting will be widely used. That holds true whether the application is internal- or external-facing.

In IT projects, such optimism can be very expensive because it leads us to purchase many more servers than we actually need. At the same time, with typical lead times of many weeks or even months to provision new servers and storage in a traditional IT shop, it's important not to get caught with too little infrastructure. Nothing kills a new system's acceptance faster than poor performance or significant downtime due to overloaded servers. The result is that new systems typically get provisioned with far more infrastructure than they really need. When in doubt, it's better to have too much than too little.

As proof of this, consider that it is typical for an enterprise to have server utilization rates below 15%. That means that, on average, 85% of the capacity companies buy from IBM, HP, Dell, EMC, NetApp, Cisco and other infrastructure providers sits idle. Most would peg ideal utilization at somewhere around 70% (performance degrades above a certain level), so the 55-point gap means that somewhere between $5 and $6 of every $10 we spend on hardware only enriches the vendors and adds no value to the enterprise.
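A back-of-the-envelope sketch of that arithmetic in Python (the 15% and 70% figures are the estimates above, not measurements):

```python
# Back-of-the-envelope waste estimate using the figures above.
actual_utilization = 0.15  # typical enterprise server utilization
ideal_utilization = 0.70   # practical ceiling before performance degrades

idle_capacity = 1 - actual_utilization                 # 85% of capacity sits idle
excess_spend = ideal_utilization - actual_utilization  # 55 points below the ideal

print(f"Idle capacity: {idle_capacity:.0%}")              # 85%
print(f"Wasted spend per $10: ${excess_spend * 10:.2f}")  # $5.50
```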

Even with virtualization we tend to over-provision. It takes a lot of discipline, planning and expense to drive utilization above 50%, and, like most things in life, it gets harder (and more expensive) the closer you get to the top. The automation tools, processes, monitoring and management of an optimized environment require a substantial investment of money, people and time. Are most companies even capable of sustaining that investment?

I haven’t even touched on the variability of demand. Very few systems have a stable demand curve. For business applications, there are peaks and valleys even during business hours (10-11 AM and 2-3 PM tend to be peaks while early, late and lunchtime are valleys). If you own your infrastructure, you’re paying for it even when you’re not using it. How many people are on your systems at 3:00 in the morning?

If a company looks at its actual utilization rate for infrastructure, is it still cheaper to run it in-house? Or does the cloud look more attractive? Consider that cloud servers are on-demand and pay-as-you-go. The same is true for storage.

If you build your shiny new application to scale out (that is, to use a larger number of smaller commodity servers when demand is high) and you enable the auto-scaling features available in some clouds and cloud tools, your applications will always use what they need, and only what they need, at any time. For example, at peak you might need 20 front-end Web servers to handle the load of your application, but perhaps only one in the middle of the night. In this case a cloud infrastructure will be far less costly than in-house servers. See the demand chart below for a typical application accessed from only one geography.
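To make that concrete, here's a minimal sketch of what an autoscaling rule boils down to. The 500-requests-per-second per-server capacity is a purely hypothetical assumption; real autoscaling tools key off live metrics like CPU load or request rates.

```python
import math

def servers_needed(requests_per_sec: float, capacity_per_server: float = 500.0) -> int:
    """Scale out: run only as many small commodity servers as current demand requires."""
    # capacity_per_server is a hypothetical figure for illustration only
    return max(1, math.ceil(requests_per_sec / capacity_per_server))

print(servers_needed(10000))  # peak load -> 20 front-end servers
print(servers_needed(300))    # middle of the night -> 1 server
```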

So, back to the point about over-provisioning. If you buy for the peak plus some percentage to ensure availability, most of the time you'll have too much infrastructure on hand. In the chart above, assume that we purchased 25 servers to cover the peak load. In that case, only 29% of the available server-hours in a day are used: 174 hours out of 600 available (25 servers × 24 hours).

Now, take the simple math a step further: if your internal cost per hour is $1 (for simplicity), then the cloud cost would need to be $3.45 per hour to reach equivalency ($1 / 0.29). A well-architected application that uses autoscaling in the cloud can run far more cheaply than in a traditional environment.
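Here's the same math spelled out in a few lines of Python (all of the figures come from the example above):

```python
# All figures come from the over-provisioning example above.
servers = 25             # purchased to cover the peak
hours_per_day = 24
used_server_hours = 174  # server-hours actually needed per day

available = servers * hours_per_day          # 600 server-hours per day
utilization = used_server_hours / available  # 0.29

internal_cost_per_hour = 1.00                # simplifying assumption
breakeven_cloud_price = internal_cost_per_hour / utilization

print(f"Utilization: {utilization:.0%}")                           # 29%
print(f"Break-even cloud price: ${breakeven_cloud_price:.2f}/hr")  # $3.45
```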

Build your applications to scale out, and take advantage of autoscaling in a public cloud, and you’ll never have to over-provision again.

Follow me on Twitter for more cloud conversations: http://twitter.com/cloudbzz

Notice: This article was originally posted at http://CloudBzz.com by John Treadway.

(c) CloudBzz / John Treadway


The Cloudification of IT

Solid matter can be converted to a gas or liquid if a catalyst (chemical, heat, etc.) is applied.  The molecules speed up, eventually breaking the bonds that hold them together.  This liquefaction (conversion to a liquid state) or gasification (conversion to a gaseous state) enables solid matter to flow more freely, to take on more dynamic variability.

Cloud computing can be the catalyst that transforms IT to a more dynamic and flexible state, enabling responsiveness and value creation not possible in its current solid form.  The cloud can free IT from the shackles of capital budgeting, the tax of application maintenance, the root canal of software upgrades and the drain of constant infrastructure refresh cycles.

A cloud or consumerization approach to end user devices can extend this transformation.  Subsidizing employee purchases of PCs, laptops and smartphones can eliminate much of the cost of helpdesks and maintenance depots.  Provisioning virtual desktops to these devices further reduces the cost.  Employees get the most up-to-date equipment, and can get their support from the place they bought it.

In some ways, small and mid-sized enterprises (SMEs) are far more likely to achieve cloudification in the near term.  They have less in-house infrastructure to begin with and get far more value from "using" IT than acquiring it.  The lack of capital also plays a part: when you're small, you're less likely to have the flexibility to invest in systems with a long payback period.

By letting others (e.g. service providers) wrestle with these issues, enterprise IT can focus far more on value creation through new systems and capabilities.  It can also focus more on governance and security.  A truly “cloudified” IT function is more nimble, productive, and efficient than a traditional environment.
