Tag Archives: private cloud

IaaS Cloud Litmus Test – The 5 Minute VM

I will make this simple.  There is only one question you need to ask yourself or your IT department to determine if what you have is really an Infrastructure-as-a-Service cloud.

Can I get a VM in 5-10 minutes?

Want that in a bit more detail?

Can a properly credentialed user, with a legitimate need for cloud resources, log into your cloud portal or use your cloud API, request a set of cloud resources (compute, network, storage), and have them provisioned automatically in a matter of a few minutes (typically less than 10, and often less than 5)?

If you can answer yes, congratulations – it’s very likely a cloud.  If you cannot answer yes, it is NOT an IaaS cloud. There is no wiggle room here.
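
To make the litmus test concrete, here is a minimal sketch of what “request a VM from the cloud API” looks like against a public cloud such as Amazon EC2, using the boto3 SDK. The AMI ID, key pair, and security group below are placeholders rather than anything specified in this post:

```python
# Minimal sketch: request a VM (EC2 instance) through the cloud API.
# The AMI ID, key pair, and security group are placeholders.
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-12345678",           # placeholder machine image
    InstanceType="t2.micro",          # small commodity instance
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",             # placeholder key pair
    SecurityGroupIds=["sg-12345678"], # placeholder security group
)
instance_id = response["Instances"][0]["InstanceId"]

# Poll until the instance is running -- the "5 Minute VM" test, in code.
start = time.time()
waiter = ec2.get_waiter("instance_running")
waiter.wait(InstanceIds=[instance_id])
print(f"{instance_id} running after {time.time() - start:.0f} seconds")
```

If that round trip reliably completes in a few minutes, end to end and with no human in the loop, you pass.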

Cloud is an operating model supported by technology.  And that operating model has as its core defining characteristic the ability to request and receive resources in real-time, on-demand. All of the other NIST characteristics are great, but no amount of metering (measured service), resource pooling, elasticity, or broad network access (aka Internet) can overcome a 3-week (or worse) provisioning cycle for a set of VMs.

Tie this to your business drivers for cloud.

  • Agility? Only if you get your VMs when you need them.  Like NOW!
  • Cost? If you have lots of manual approvals and provisioning, you have not taken the cost of labor out.  The 5 Minute VM requires 100% end-to-end automation with no manual approvals.
  • Quality? Back to manual processes – these are error-prone because humans suck at repetitive tasks compared to machines.

Does that thing you call a cloud give you a 5 Minute VM?  If not, stop calling it a cloud and get serious about building the IT Factory of the Future.

“You keep using that word [cloud].  I do not think it means what you think it means.”

– The Princess Cloud


(c) 2012 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.


The End of Over-Provisioning

One part of the debate on cloudonomics that often gets overlooked is the effect of over-provisioning. Many people look at the numbers and say they can run a server for less money than they can buy the same capacity in the cloud. And, assuming that you optimize the utilization of that server, that may be true for some of us. But that’s a very big and risky assumption.

People are optimists – well, at least most of us are. We naturally believe that the application we spend our valuable time creating and perfecting will be widely used. That holds true whether the application is internal- or external-facing.

In IT projects, such optimism can be very expensive because we feel compelled to purchase many more servers than we actually need. Moreover, with the typical lead time of many weeks or even months to provision new servers and storage in a traditional IT shop, it’s important not to get caught with too little infrastructure. Nothing kills a new system’s acceptance more than poor performance or significant downtime due to overloaded servers. The result is that new systems typically get provisioned with far more infrastructure than they really need. When in doubt, it’s better to have too much than too little.

As proof of this, it is typical for an enterprise to have server utilization rates below 15%. That means that, on average, 85% of the money companies spend with IBM, HP, Dell, EMC, NetApp, Cisco and other infrastructure providers is wasted. Most would peg ideal utilization rates at somewhere in the 70% range (performance degrades above a certain level), so somewhere between $5 and $6 of every $10 we spend on hardware only enriches the vendors and adds no value to the enterprise.

Even with virtualization we tend to over-provision. It takes a lot of discipline, planning and expense to drive utilization above 50%, and like most things in life, it gets harder the closer we are to the top. And more expensive. The automation tools, processes, monitoring and management of an optimized environment require a substantial investment of money, people and time. And after all, are most companies even capable of sustaining that investment?

I haven’t even touched on the variability of demand. Very few systems have a stable demand curve. For business applications, there are peaks and valleys even during business hours (10-11 AM and 2-3 PM tend to be peaks while early, late and lunchtime are valleys). If you own your infrastructure, you’re paying for it even when you’re not using it. How many people are on your systems at 3:00 in the morning?

If a company looks at its actual utilization rate for infrastructure, is it still cheaper to run it in-house? Or does the cloud look more attractive? Consider that cloud servers are on-demand, pay as you go. Same for storage.

If you build your shiny new application to scale out – that is, use a larger quantity of smaller commodity servers when demand is high – and you enable the auto-scaling features available in some clouds and cloud tools – your applications will always use what they need, and only what they need, at any time. For example, at peak you might need 20 front-end Web servers to handle the load of your application, but perhaps only one in the middle of the night. In this case a cloud infrastructure will be far less costly than in-house servers. See the demand chart below for a typical application accessed from only one geography.

So, back to the point about over-provisioning. If you buy for the peak plus some % to ensure availability, most of the time you’ll have too much infrastructure on hand. In the above chart, assume that we purchased 25 servers to cover the peak load. In that case, only 29% of the available server hours in a day are used: 174 hours out of 600 available hours (25 servers x 24 hours).

Now, if you take the simple math a step further, you can see that if your internal cost per hour is $1 (for simplicity), then the cloud cost would need to be $3.45 per hour to approach equivalency ($1 / 0.29). A well-architected application that uses autoscaling in the cloud can run far more cheaply than in a traditional environment.
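
The arithmetic is easy to check. A few lines of Python reproduce the utilization and break-even figures above:

```python
# Reproduce the over-provisioning math from the example above.
peak_servers = 25            # servers purchased to cover the peak load
used_server_hours = 174      # server-hours actually consumed in a day
available_hours = peak_servers * 24   # 600 server-hours available per day

utilization = used_server_hours / available_hours   # 0.29
internal_cost_per_hour = 1.00                       # simplified internal cost

# Break-even cloud price: what a cloud hour could cost and still match
# the effective cost of an internal hour at this utilization.
breakeven_cloud_price = internal_cost_per_hour / utilization

print(f"Utilization: {utilization:.0%}")                              # 29%
print(f"Break-even cloud price: ${breakeven_cloud_price:.2f}/hour")   # $3.45
```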

Build your applications to scale out, and take advantage of autoscaling in a public cloud, and you’ll never have to over-provision again.
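
As a rough sketch of what “take advantage of autoscaling” can mean in practice, here is approximately how you would define a group that runs between 1 and 20 web servers on AWS using boto3. The names, AMI ID, and CPU target are illustrative assumptions, not values from this post, and other clouds offer equivalent mechanisms:

```python
# Sketch: an auto-scaling group that runs 1 server at night and up to 20 at peak.
# All names, IDs, and thresholds below are illustrative placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-frontend-lc",
    ImageId="ami-12345678",        # placeholder web server image
    InstanceType="t2.micro",
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-frontend-asg",
    LaunchConfigurationName="web-frontend-lc",
    MinSize=1,                     # one server in the middle of the night
    MaxSize=20,                    # twenty servers at peak
    DesiredCapacity=1,
    AvailabilityZones=["us-east-1a"],
)

# Track average CPU so the group grows and shrinks with demand.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-frontend-asg",
    PolicyName="track-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,       # assumed target, keeps headroom below ~70%
    },
)
```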

Follow me on twitter for more cloud conversations: http://twitter.com/cloudbzz

Notice: This article was originally posted at http://CloudBzz.com by John Treadway.

(c) CloudBzz / John Treadway


Private Cloud for Interoperability, or for “Co-Generation?”

There has been a lot of good discussion lately about the semantics of private vs. public clouds.  The debate generally revolves around elasticity.  It goes something like this: “If you have to buy your own servers and deploy them in your data center, that’s not very elastic and therefore cannot be cloud.”  Whether or not you buy into the distinction, private clouds (if you want to call them that) do suffer from inelasticity.  In his VPC blog post, Werner Vogels debunks the private cloud as not real:

“Private Cloud is not the Cloud

These CIOs know that what is sometimes dubbed “private cloud” does not meet their goal as it does not give them the benefits of the cloud: true elasticity and capex elimination. Virtualization and increased automation may give them some improvements in utilization, but they would still be holding the capital, and the operational cost would still be significantly higher.”

What if we were to look at the private cloud concept as an interoperability play?  If someone implements a cloud-like automation, provisioning and management infrastructure in their data center to gain many of the internal business process benefits of cloud computing (perhaps without the financial benefits of opex vs. capex and elastic “up/down scaling”), it still can be a very valuable component of a cloud computing strategy.    It’s not “the Cloud” as Werner points out.  It’s just part of the cloud.

To realize this benefit requires a certain degree of interoperability and integration between my “fixed asset cloud” and the public “variable expense cloud,” such that I can use and manage them as a single elastic cloud (this is what is meant by “hybrid cloud”).  Remember that enterprises will always need some non-zero, non-trivial level of computing resources to run their business.  It is possible that these assets can be acquired and operated over a 3-5 year window at a lower TCO than public cloud equivalents (in terms of compute and storage resources).

Managing these fixed + variable hybrid cloud environments in an interoperable way requires tools such as cloud brokers (RightScale, CloudKick, CloudSwitch, etc.).  It also requires your internal cloud management layer to be compatible with these tools.  Enterprise outsourcers like Terremark, Unisys and others may also provide uniform environments for their clients to operate in this hybrid world. In a hybrid model you get the benefits of full elasticity, since your view of the data center includes the public cloud providers you have enabled. You may choose to stop all new capex going forward while leveraging the value of prior capex (sunk costs) you’ve already made.  In this context, private cloud is very much part of your cloud computing strategy.  A purely walled-off private cloud with no public cloud interoperability is really not a cloud computing strategy – on this point I agree with Vogels.
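
There is no single standard API for this today, but the placement logic behind the fixed + variable model is easy to sketch: consume the sunk-cost private capacity first, and burst only the overflow to the variable-expense public cloud. The following toy Python example uses invented capacities purely for illustration:

```python
# Toy sketch of hybrid "fixed + variable" placement: fill the private pool
# (already-paid-for capacity) first, then burst the remainder to public cloud.
# Capacities and workload sizes are invented for illustration.

PRIVATE_CAPACITY = 40   # VMs the internal (sunk-cost) cloud can host

def place(requested_vms: int, private_in_use: int) -> dict:
    """Split a request between the private pool and the public cloud."""
    private_free = max(PRIVATE_CAPACITY - private_in_use, 0)
    to_private = min(requested_vms, private_free)
    to_public = requested_vms - to_private
    return {"private": to_private, "public": to_public}

# A quiet period fits entirely in-house; a peak bursts to the public cloud.
print(place(requested_vms=10, private_in_use=5))    # {'private': 10, 'public': 0}
print(place(requested_vms=30, private_in_use=25))   # {'private': 15, 'public': 15}
```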

Co-Generation:  Selling Your Private Cloud By the Drink

Now, assuming you’ve built a really great data center capability, implemented a full hybrid cloud environment with interoperability and great security, what’s to stop you from turning around and selling off any excess capacity to the public cloud?  Think about it – if you can provide a fully cloud-compatible environment on great hardware that’s superbly managed, virtualized, and secured, why can’t you rent out any unused capacity you have at any given time?  Just like electricity co-generation, when I need more resources I draw from the cloud, but when I have extra resources I can sell it to someone else who has the need.

You might say that if your cloud environment is truly elastic, you’ll never have excess capacity.  Sorry, but things are never that easy.  Today large enterprises typically have very poor asset utilization, but for financial and other reasons dumping this capacity on eBay does not always make sense.  So, what about subletting your computing capacity to the cloud?

If I take all of the big corporate data centers in the world and weave them into this open co-generation market, then instead of buying instances from Amazon, Citigroup can buy them from GE or Exxon. And if you need a configuration that is common in the enterprise but not in the cloud (e.g., true enterprise-class analytic servers with 100TB capacity), perhaps you can rent one for a few days. It may be more cost-effective than running the same job on 300 EC2 instances over the same timeframe.

There may be many reasons why the co-generation cloud computing market may never evolve, but those reasons are not technical. Doing this is not rocket science.


Cloud Computing Announcement of the Year – Amazon Virtual Private Cloud!

Last night Amazon announced the most significant cloud development of 2009 – the Amazon Virtual Private Cloud (VPC). The AWS Developer Blog version is here.  The importance of VPC cannot be overstated.  It will literally change how enterprises think about public cloud providers and the opportunity to gain efficiency and flexibility in datacenter operations.

By integrating with the security, governance and compliance infrastructures of enterprise IT, VPC eliminates one of the primary barriers to cloud adoption for mainstream business computing. Sure, there are still going to be issues, but this was the big one.

I won’t rehash all of the offering details here.  You can read them on Werner Vogels’ blog and on TechCrunch.

The hybrid cloud is a reality.  You can now connect your internal fixed IT infrastructure to large external clouds with a high degree of integration with enterprise tools. VPC allows you to assign IP addresses, create subnets, and connect your existing data centers to Amazon using secure VPN technology. Sure, this is not the same level of connectivity as the dedicated secure lines that most big outsourcers provide, but it’s pretty strong, and many very smart people (including Chris Hoff at Cisco) are bullish from a security perspective.
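
To make the “assign IP addresses, create subnets, and connect over VPN” point concrete, here is roughly what that sequence looks like through the EC2 API as exposed by the boto3 SDK today. The CIDR blocks, gateway IP, and BGP ASN are placeholder values, and the original 2009 VPC API differed in its details:

```python
# Sketch: carve out a VPC with your own addressing and connect it to an
# on-premises data center over an IPsec VPN. All values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")                      # your address space
vpc_id = vpc["Vpc"]["VpcId"]

subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")  # a subnet you define

# Amazon-side VPN gateway, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
vgw_id = vgw["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpcId=vpc_id, VpnGatewayId=vgw_id)

# Your data center's side of the tunnel (public IP and BGP ASN are placeholders).
cgw = ec2.create_customer_gateway(Type="ipsec.1",
                                  PublicIp="203.0.113.12",
                                  BgpAsn=65000)
cgw_id = cgw["CustomerGateway"]["CustomerGatewayId"]

# The IPsec connection that stitches the two sides together.
vpn = ec2.create_vpn_connection(Type="ipsec.1",
                                CustomerGatewayId=cgw_id,
                                VpnGatewayId=vgw_id)
print("VPN connection:", vpn["VpnConnection"]["VpnConnectionId"])
```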


I will think about this a bit more, but Werner Vogels makes the claim that “private clouds are not clouds” mainly because they are not truly elastic.  There may be some benefits to using Eucalyptus or VMware’s vSphere in your data center, but you still need to buy hardware and install it, and that’s not cloud computing according to Vogels.

One thing is certain: the game has changed – again!  Amazon’s VPC is far and away the most significant cloud computing announcement so far this year, and I’m going to go out on a limb and predict that on December 31 it will still hold that distinction.

What do you think??
