Monthly Archives: August 2009

Private Cloud for Interoperability, or for “Co-Generation?”

There has been a lot of good discussion lately about the semantics of private vs. public clouds.  The debate generally revolves around elasticity.  It goes something like this: “If you have to buy your own servers and deploy them in your data center, that’s not very elastic and therefore cannot be cloud.”  Whether or not you buy into the distinction, private clouds (if you want to call them that) do suffer from inelasticity.  In his VPC blog post, Werner Vogels dismisses the private cloud as not being the real thing:

“Private Cloud is not the Cloud

These CIOs know that what is sometimes dubbed “private cloud” does not meet their goal as it does not give them the benefits of the cloud: true elasticity and capex elimination. Virtualization and increased automation may give them some improvements in utilization, but they would still be holding the capital, and the operational cost would still be significantly higher.”

What if we were to look at the private cloud concept as an interoperability play?  If someone implements cloud-like automation, provisioning and management infrastructure in their data center to gain many of the internal business process benefits of cloud computing (perhaps without the financial benefits of opex vs. capex and elastic “up/down scaling”), it can still be a very valuable component of a cloud computing strategy.  It’s not “the Cloud,” as Werner points out.  It’s just part of the cloud.

Realizing this benefit requires a certain degree of interoperability and integration between my “fixed asset cloud” and the public “variable expense cloud,” such that I can use and manage them as a single elastic cloud (this is what is meant by “hybrid cloud”).  Remember that enterprises will always need some non-zero, non-trivial level of computing resources to run their business.  It is possible that these assets can be acquired and operated over a 3-5 year window at a lower TCO than their public cloud equivalents (in terms of compute and storage resources).
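To see the shape of that 3-5 year TCO comparison, here is a minimal back-of-the-envelope sketch in Python.  Every figure in it (server price, operating cost, hourly instance rate, utilization) is a hypothetical placeholder rather than real pricing; the point is only how the fixed and variable sides line up for a steady baseline workload.

```python
# Back-of-the-envelope TCO comparison: an owned server running a steady baseline
# workload vs. renting the equivalent capacity on demand.
# Every number below is a hypothetical placeholder, not actual pricing.

YEARS = 3
HOURS = YEARS * 365 * 24

# Fixed-asset ("private cloud") side
server_capex = 6_000        # purchase price per server (hypothetical)
opex_per_year = 1_500       # power, cooling, space, admin share (hypothetical)
owned_tco = server_capex + opex_per_year * YEARS

# Variable-expense ("public cloud") side
hourly_rate = 0.50          # on-demand price per equivalent instance (hypothetical)
utilization = 0.90          # a baseline workload runs nearly around the clock
cloud_tco = hourly_rate * HOURS * utilization

print(f"Owned server, {YEARS}-year TCO: ${owned_tco:,.0f}")
print(f"Equivalent on-demand usage:   ${cloud_tco:,.0f}")
```

With these made-up numbers the owned server comes out ahead; with lower utilization or cheaper instances the answer flips, which is exactly why the fixed vs. variable split is a per-workload decision.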

Managing these fixed + variable hybrid cloud environments in an interoperable way requires tools such as cloud brokers (RightScale, CloudKick, CloudSwitch, etc.).  It also requires your internal cloud management layer to be compatible with these tools.  Enterprise outsourcers like Terremark, Unisys and others may also provide uniform environments for their clients to operate in this hybrid world. In a hybrid model you get the benefits of full elasticity, since your view of the data center includes the public cloud providers you have enabled. You may choose to stop all new capex going forward while still leveraging the value of the capex (sunk costs) you’ve already committed.  In this context, private cloud is very much part of your cloud computing strategy.  A purely walled-off private cloud with no public cloud interoperability is really not a cloud computing strategy – on this point I agree with Vogels.
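As a rough illustration of what that broker-style management layer is doing, here is a hypothetical placement policy in Python: fill already-paid-for internal capacity first, and burst the remainder to a public provider.  The class and function names are invented for illustration and do not correspond to RightScale, CloudKick, CloudSwitch or any other product’s API.

```python
# Hypothetical hybrid-cloud placement policy: prefer internal (sunk-cost)
# capacity, burst the overflow to a public cloud provider.
# All names here are illustrative; no specific broker API is implied.

from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    capacity: int                 # free instance slots in this pool

    def take(self, requested: int) -> int:
        granted = min(self.capacity, requested)
        self.capacity -= granted
        return granted

def place(instances_needed: int, internal: Pool, public: Pool) -> dict:
    """Decide how many instances land in each pool for one workload."""
    placement = {internal.name: internal.take(instances_needed)}
    remainder = instances_needed - placement[internal.name]
    placement[public.name] = public.take(remainder)   # burst to public cloud
    if sum(placement.values()) < instances_needed:
        raise RuntimeError("not enough combined capacity for this workload")
    return placement

if __name__ == "__main__":
    internal = Pool("datacenter", capacity=40)        # fixed, already-owned hardware
    public = Pool("public-cloud", capacity=10_000)    # effectively elastic
    print(place(100, internal, public))               # {'datacenter': 40, 'public-cloud': 60}
```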

Co-Generation:  Selling Your Private Cloud By the Drink

Now, assuming you’ve built a really great data center capability, implemented a full hybrid cloud environment with interoperability and great security, what’s to stop you from turning around and selling off any excess capacity to the public cloud?  Think about it – if you can provide a fully cloud-compatible environment on great hardware that’s superbly managed, virtualized, and secured, why can’t you rent out any unused capacity you have at any given time?  Just like electricity co-generation, when I need more resources I draw from the cloud, but when I have extra resources I can sell them to someone else who needs them.

You might say that if your cloud environment is truly elastic, you’ll never have excess capacity.  Sorry, but things are never that easy.  Today large enterprises typically have very poor asset utilization, but for financial and other reasons dumping this capacity on eBay does not always make sense.  So, what about subletting your computing capacity to the cloud?

Now imagine taking all of the big corporate data centers in the world and weaving them into this open co-generation market: instead of buying instances from Amazon, Citigroup could buy them from GE or Exxon.  And if you need a configuration that is common in the enterprise but rare in the cloud (e.g. a true enterprise-class analytic server with 100TB of capacity), perhaps you could rent one for a few days.  It may be more cost-effective than running the same job on 300 EC2 instances over the same timeframe.
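As a quick, purely illustrative comparison of that last point (every rate below is invented, not real pricing from Amazon or anyone else):

```python
# Hypothetical cost comparison: renting one enterprise-class analytic server
# from another company's data center for a few days vs. running ~300 commodity
# cloud instances for the same job. All rates are made-up placeholders.

job_hours = 72                  # a three-day analytic job

big_box_rate = 40.00            # $/hour for one 100TB-class analytic server (hypothetical)
big_box_cost = big_box_rate * job_hours

instance_rate = 0.40            # $/hour per commodity instance (hypothetical)
instance_count = 300
instances_cost = instance_rate * instance_count * job_hours

print(f"One rented analytic server: ${big_box_cost:,.0f}")
print(f"{instance_count} commodity instances:    ${instances_cost:,.0f}")
```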

There may be many reasons why the co-generation cloud computing market may never evolve, but those reasons are not technical. Doing this is not rocket science.


Cloud BI & Amazon VPC – Low Hanging Fruit for the Enterprise

Today RightScale did a webinar on their Cloud Business Intelligence offering with Talend, Jaspersoft and Vertica.  One of the bigger objections to cloud BI in the past has been security — how can I move all of this mission-critical data to an insecure public cloud?

With Amazon VPC now in the picture, BI datasets are as secure at Amazon as they are in your data center.  Why wouldn’t you use the cloud for your BI needs?


Cloud Computing Announcement of the Year – Amazon Virtual Private Cloud!

Last night Amazon announced the most significant cloud development of 2009 – the Amazon Virtual Private Cloud (VPC). The AWS Developer Blog version is here.  The importance of VPC cannot be overstated.  It will literally change how enterprises think about public cloud providers and the opportunity to gain efficiency and flexibility in data center operations.

By integrating with the security, governance and compliance infrastructures of enterprise IT, VPC eliminates one of the primary barriers to cloud adoption for mainstream business computing. Sure, there are still going to be issues, but this was the big one.

I won’t rehash all of the offering details here.  You can read them on Werner Vogels’ blog and TechCrunch.

The hybrid cloud is a reality.  You can now integrate your internal fixed IT infrastructure with large external clouds while keeping a high degree of integration with your enterprise tools. VPC allows you to assign IP addresses, create subnets, and connect your existing data centers to Amazon using secure VPN technology. Sure, this is not the same level of connectivity as the dedicated secure lines that most big outsourcers provide, but it’s pretty strong, and many very smart people (including Chris Hoff at Cisco) are bullish from a security perspective.
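To make those building blocks concrete, here is a minimal sketch using the boto3 Python SDK (a much more recent AWS SDK than the APIs available when VPC launched).  The CIDR ranges, the corporate gateway IP and the ASN are hypothetical placeholders, and error handling is omitted.

```python
# Minimal sketch: carve out a VPC, a subnet, and an IPsec VPN link back to an
# on-premises data center using boto3. CIDR blocks, the customer gateway IP,
# and the ASN below are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2")

# Private address space for the VPC and one subnet inside it
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

# VPN gateway on the AWS side, attached to the VPC
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId=vpc["VpcId"])

# Customer gateway represents the enterprise's own VPN device
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",   # hypothetical public IP of the corporate VPN device
    BgpAsn=65000,              # hypothetical ASN
)["CustomerGateway"]

# IPsec VPN connection tying the data center to the VPC
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
)["VpnConnection"]

print("VPC:", vpc["VpcId"], "VPN connection:", vpn["VpnConnectionId"])
```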

[Figure: VPC diagram]

I will think about this a bit more, but Werner Vogels makes the claim that “private clouds are not clouds” mainly because they are not truly elastic.  There may be some benefits to using Eucalyptus or VMware’s vSphere in your data center, but you still need to buy hardware and install it, and according to Vogels that’s not cloud computing.

One thing is certain: the game has changed – again!  Amazon’s VPC is far and away the most significant cloud computing announcement so far this year, and I’m going to go out on a limb and predict that on December 31 it will still hold that distinction.

What do you think??


Deep Data from InfiBase

Update: InfiBase has ceased operations, but the analyses they were providing may continue.  Stay tuned.

A stealth start-up called InfiBase has published some very interesting data on their blog recently. It makes me want to know more about them, so if you have the scoop let me know.

First, they have put out two posts on sites using Amazon EC2, with other cloud providers included in the latest post earlier this month. Here is their chart showing the top 500,000 sites by cloud provider.  Note how close Amazon EC2 and Rackspace Cloud Servers (based on Slicehost) are in this ranking.

[Chart: top 500,000 sites by cloud provider. Source: InfiBase]

I was interested to see Joyent in third place, well ahead of both Google and GoGrid, and I wonder what this might look like a year from now.

In another post InfiBase performed a deep dive into the processor characteristics of the various EC2 instance types, including which processors are being used and how they stack up.  Here is just one of their great charts, which shows that AMD processors are used at the low end of EC2 while Intel takes over at the very high end.

[Chart: AMD vs. Intel processors by EC2 instance type. Source: InfiBase]

With the data they are previewing in their blog (see the full posts there), I am intrigued.


Cloud-Washing at Salesforce.com

As a general rule, I am happy to count Salesforce.com as a cloud computing company.  They really made the SaaS market what it is today, and their Force.com platform-as-a-service was a great innovation.  They are not an infrastructure cloud provider like Amazon, Rackspace or others, but okay – they’re a cloud company.

However, when I see their current marketing and branding, it makes me want to chuckle.  Instead of Salesforce, Successforce, and Force.com, they now market Sales Cloud, Service Cloud, and Custom Cloud.  They already had the cloud creds, but trying so hard makes them look a bit silly.  I wonder if this rebranding is hurting or helping their sales numbers…


Skytap Does Windows (7)

Skytap announced today a Windows 7 cloud-based testing solution for ISVs and corporate developers.  Testing is one of the oft-cited use cases for cloud computing in the enterprise.  For many companies, provisioning and managing testing infrastructure can be very expensive.  With Windows 7 due in a few months, and many reviewers giving it a big thumbs up over Vista, there may be a huge opportunity for Skytap to help companies get ready for this conversion.

If you are an ISV or corporate development organization needing to support Windows 7, you should check out Skytap.

SaaS v. Cloud Should Not Be Contentious…

Chris Hoff has a new post over at Rational Survivability where he attempts to make sense of when a SaaS solution should or should not be considered “cloud.”  In his analysis, Hoff tries to strictly apply NIST’s cloud computing definition to various types of SaaS offerings (say, hosted email vs. Salesforce.com).

I think that this approach, while intellectually interesting, is perhaps a bit off the mark.  While NIST’s framework for cloud computing is generally accepted on the surface, the lower-level distinctions they make may not be so universally agreed upon.  A perfect example of this is contained in this NIST “essential characteristic” of cloud computing:

Location independent resource pooling. The provider’s computing resources are pooled to serve all consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. The customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

This is the one NIST cloud characteristic that most mixes requirements of users and providers – and applies most to IaaS and perhaps PaaS solutions.  How a SaaS provider manages their internal infrastructure to provision an on-demand, elastic, metered and internet-accessible application is irrelevant.  SaaS need not be built on IaaS foundations to be considered a cloud computing service.

NIST aside, I think that the bigger issue is whether, from a business perspective (vs. technical), the following question from Hoff has any meaning:

If a SaaS offering is not built upon an IaaS/PaaS offering that is itself characteristically qualified as Cloud per definitions like NIST, is it a Cloud SaaS offering or just a SaaS in Cloud’s clothing?

If you are a CIO, CEO or other executive tasked with choosing between solving your requirements with in-house software & systems or a SaaS solution, does how the SaaS solution is architected internally really matter?  I’m assuming that the technical requirements that do matter to the buyer – functionality, scalability, reliability, etc. – are addressed adequately, of course.  Beyond that, why do I care if the underlying technology meets NIST’s cloud definitions for IaaS?

The answer is simple – I don’t care.  If an application meets the generally accepted pre-cloud definition of SaaS (provided on an on-demand, elastic, metered and internet-accessible basis), then as far as I’m concerned, it’s cloud.

Under the covers it could be running on a single IBM mainframe and I’d still call it a cloud application when viewed through the lens of the business.

And ultimately, isn’t that what matters most?
