Monthly Archives: July 2010

Cloudy View from HostingCon

I spent a couple of days in Austin at HostingCon, meeting with a broad cross-section of the hosting community.  Rackspace CTO John Engates and lots of other “Rackers” were there to promote OpenStack.  Most of the other big mass-market shared hosters were there too – like The Planet, Hosting.com and others.  Then there were lots of little guys – small hosting resellers, guys with a couple thousand feet of space inside a larger data center, and so on.

A good 40% of the conference content was about cloud.  But for people who have been in the cloud business for the past few years, it might have felt a lot like 2007.  Lots of very basic information being shared and discussed, and a whole bunch of people who don’t know or don’t want to know.  I stopped by the cPanel booth in the expo.  cPanel is the #1 hosting control panel for the shared hosting business, with a gazillion hosters using their stuff.  I asked one of their executives whether they were going to make it easy for their customers to move to a cloud model.  “Customers are asking us, but then we ask them what they mean by cloud and as soon as they can give us a straight answer maybe we’ll do that,” was his reply.  Okay, that’s a failure to lead if I ever saw one.

Some of the guys who do have clouds, like Hosting.com’s vCloud Express, are seeing substantial uptake in this new (to them) market.  Clearly we should be expecting a new wave of clouds to start appearing in the next few months.  The average revenue per user (ARPU) of cloud is so much higher than shared hosting, they can’t let it pass them by.  However, most of these guys will struggle to get there, since they generally lack the capability to develop what they need to make this work (none of the “cloud stack” solutions on the market today is as plug-and-play as cPanel, and all require a lot of knowledge, skill and investment to get running).  Uh, opportunity calling??

I did learn a lot about the business models these guys are used to, which are somewhat different from what we’re all comfortable with in the cloud space – a flat fee per user, per module/capability used, per month is a good summary.  Basically, you make money when the hosting guys are selling, not when they have servers that are ready but not being used.  The $xxx/year/socket model won’t work for these guys.
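To make the contrast concrete, here is a rough sketch of how the two models behave for a small reseller.  All rates and figures below are hypothetical, purely for illustration – they are not actual vendor pricing.

```python
# Hypothetical comparison of hosting-style pricing vs. a per-socket license.
# All numbers are illustrative, not real vendor rates.

def hosting_model_revenue(users, modules_per_user, fee_per_module=5.0):
    """Flat fee per user, per module, per month -- the vendor earns
    only when the hoster is actually selling."""
    return users * modules_per_user * fee_per_module

def per_socket_monthly_cost(sockets, annual_fee_per_socket=1500.0):
    """Per-socket annual license -- the hoster pays whether or not
    the capacity is sold, which is why this model won't fly here."""
    return sockets * annual_fee_per_socket / 12

# A small reseller: 200 customers with 3 paid modules each,
# running on 8 sockets of mostly idle capacity.
print(hosting_model_revenue(200, 3))      # revenue scales with sales
print(per_socket_monthly_cost(8))         # cost accrues regardless of sales
```

The key difference the hosters care about: in the first model cost tracks revenue, while in the second, unsold capacity still burns license dollars every month.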

Another big part of this market is the hosting reseller business – something for the cloud guys to consider.  A little host called SingleHop actually ran a session about reselling their cloud, and ReliaCloud from MN was looking to do the same.  How many ways can you slice and dice it?  That brings me to a point about VMware and their VSPP program.  It won’t fly for long in this market at the current prices.  There’s not enough margin left for the reseller business – which is a huge issue here.

So, that’s about it from HostingCon 2010. 

Follow me on twitter for more cloud conversations: http://twitter.com/cloudbzz

Notice: This article was originally posted at http://CloudBzz.com by John Treadway.

(c) CloudBzz / John Treadway

OpenStack First Reaction – Rackspace Open Sources Their Cloud

Late yesterday, Rackspace launched OpenStack with a reasonable community of boosters.  OpenStack aims to disrupt the cloud stack red ocean with a complete open source release of the Rackspace CloudServers compute and CloudFiles object storage systems for use by anybody. 

Importantly, OpenStack is released under Apache 2.0, which basically means you can pretty much do as you please with the code – including commercializing it to a degree (e.g. charge for support).  As Krish tweeted to me last night – OpenStack is kind of the Apache of cloud stacks.

Lots of people are jumping on the bandwagon, and this could end up being a very big issue for a number of the stack providers I have listed previously.  My expectation is that this will serve the needs of service providers looking to deploy an SMB cloud offering similar to Rackspace Cloud, but that it won’t do much for the enterprise for some significant period of time.  Service providers might be leery too – first in terms of having no throat to choke (no commercialization partner), but also out of concern about having a me-too service.  Do you really want to compete with Rackspace using their own code?  Smart people can still provide differentiation, but there may be a natural aversion to basing your cloud on one of your main competitors’ kit.

So, for now this is very exciting news that could change the face of the industry.  I’m here at HostingCon and will try to get a reaction from folks to see how they feel about it.  Some mass market hosters like Peer1 and SoftLayer have already joined the OpenStack bandwagon, but until they’ve actually deployed it (2011 ??), it’s just moral support.

IT Chargeback Planning – A Critical Success Factor for Enterprise Cloud

“If you don’t know your destination, any road will do.”

That little nugget from one of my colleagues concisely sums up the theme of this brief post.  After having read a recent analyst note on IT chargeback, and knowing about some of the work going on in various IT organizations in this area, I was originally going to write a detailed post about some of the most interesting aspects of this domain.  While I was thinking, the folks at TechTarget were doing, resulting in a nice article on this topic that I encourage you to read if you want to understand more about IT chargeback concepts.

As companies invest more and more in cloud computing, one of the areas that seems to be generally overlooked is the central role of IT chargeback.  After all, one of the key benefits of cloud is metering – knowing when you are using resources, at what level, and for how long.  For the first time, it is feasible to directly allocate or allot IT costs back to the business units that are consuming them.

One of the reasons business units are now going around IT to use the cloud is this transparency of costs and benefits.  In most enterprises, the allocation of IT expenses can be very convoluted, resulting in mistrust and confusion about how and why charges are taken.  If I go to Amazon, however, I know exactly what I’m paying for and why, and I can tie that back to the business value I get from Amazon’s services.  Now businesses are asking for the same “IT as a Service” approach from their IT organizations.  Anecdotally, internal customers appear willing to pay more than the public cloud price in order to get the security and manageability of an internal cloud service – at least for now.

While many IT organizations are rushing to put up any kind of internal cloud, they are often ignoring this important aspect of their program.  Negotiating in advance with your business customers on how you’re going to charge for cloud services, and why, is a good first step.  Building the interfaces between the cloud and internal accounting systems can be pretty difficult.  It’s important to take a flexible approach here, given that chargeback models can change quickly based on business conditions.

Publishing a service catalog with pricing can make it a lot easier for internal customers to evaluate, track, and audit their internal cloud expenses.  Accurate usage information, pre-defined billing “dispute” processes, and – above all – high levels of transparency regarding internal costs to provide your services are all critical to user acceptance.  If possible, put your cloud IT chargeback plans in place before you build your cloud.  Your negotiations with business units might prompt you to make changes to your services – such as different storage solutions or networking topologies to lower costs or improve SLAs.  Making these changes at the start of a cloud project can be far less expensive than making them retroactively.
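As a sketch of the idea, a published catalog with unit rates plus metered usage makes each charge trivial for a business unit to evaluate and audit.  The service names and rates below are hypothetical, just to illustrate the shape of the thing.

```python
# Hypothetical internal service catalog with published unit rates.
CATALOG = {
    "vm.small":    {"unit": "instance-hour", "rate": 0.12},
    "vm.large":    {"unit": "instance-hour", "rate": 0.48},
    "storage.std": {"unit": "GB-month",      "rate": 0.10},
}

def chargeback(usage):
    """Turn metered usage records into an auditable line-item bill.

    usage: list of (service, quantity) tuples from the metering system.
    Returns (line_items, total), where each line item carries the
    published unit and rate so disputes can be resolved transparently."""
    lines = []
    for service, qty in usage:
        item = CATALOG[service]
        lines.append((service, qty, item["unit"], round(qty * item["rate"], 2)))
    total = round(sum(line[3] for line in lines), 2)
    return lines, total

# One business unit's metered usage for the month:
# a small VM running all month plus 500 GB of standard storage.
lines, total = chargeback([("vm.small", 720), ("storage.std", 500)])
print(total)  # 136.4
```

Because every line item references a published unit and rate, the “dispute” process reduces to checking the meter – which is exactly the transparency business units get from a public cloud bill.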

Bottom line – getting IT chargeback right is key to a successful cloud program.


Open Source Cloud Bits

Last week I got into a nice discussion on Twitter regarding the role of open source in an infrastructure as a service (IaaS) stack.  With open source cloud stacks from Eucalyptus, Cloud.com, Abiquo and others competing against proprietary source solutions from Enomaly, VMware and others, this can get fairly confusing quickly.

For clarity, here is my position on open source vs. proprietary source in this part of the market: both have a role to play, and inherently neither is better or more advantaged than the other.  However, when you get into the details there are factors that might favor one model over the other in specific cases.  I will look at this from the perspective of the service providers and enterprises who use cloud stacks.  In a future post I may touch on the factors that vendors should consider when choosing between open source and closed source models.

For service providers, margins are critical.  Any increase in capital and operating costs must enable a corresponding increase in value provided in the market.  Amazon and Google have the scale and ability to build a lot of capabilities from scratch, trading a short-term increase in R&D against a long-term decrease in operating costs.

While some cloud providers may attempt to match the low-cost giants on pricing, they know that they need to differentiate in some other material way (e.g. performance, customer service, etc.).   For these providers, the more “free open source” technology that they can leverage, the lower their operating costs may be.

This low-cost focus must permeate their decision making, from the physical infrastructure (commodity servers, JBOD/DAS storage, etc.) to the hypervisor (Xen or KVM vs. VMware), to the cloud provisioning/automation layer, and more.  Open source CMDBs, monitoring technologies (e.g. Nagios) and other such tools are often found in these environments.

There are trade-offs, of course.  Open source can often be more difficult to use, lack key functionality, or suffer from poor support – all of which increase costs in material and often unintended ways (note that proprietary solutions can have many of the same issues, and do more often than most people realize).

Other service providers may target the enterprise and focus on highly differentiated offerings (though I really haven’t seen much differentiation yet, at least at the IaaS level).  For these providers, the benefits of enterprise-grade storage (EMC, NetApp, HP), VMware’s HA and fault-tolerance capabilities, and other capabilities gained from using tools from HP, IBM, BMC and other vendors may be well worth the increase in cost.  And make no mistake, the cost increase from using these technologies can be quite substantial.

Newer vendors, such as Enomaly, are having some success despite their closed-source nature (Enomaly started as open source but changed models in 2009).  Further, even when a provider uses a solution from Cloud.com or Abiquo, both of them with open source models, they will often choose to pay for premium editions in order to get functionality or support not available via open source.  In reality, anybody serious about this market will want a mix of open-source (though not necessarily free) and closed-source technologies in their environment.

In the enterprise, the story is a bit different.  If you’re already paying VMware for an all-you-can-eat enterprise license agreement (ELA), the marginal cost to use vSphere in your private cloud is zero.  KVM or Xen are not less expensive in this case.  Same is true for tools from HP, IBM, BMC and others.
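The marginal-cost point can be sketched in a few lines (all figures hypothetical): under an all-you-can-eat ELA the license is a sunk cost, so adding cloud hosts costs nothing extra, and a “free” hypervisor offers no license savings at all.

```python
# Hypothetical: marginal license cost of adding hosts to a private cloud.
# Figures are illustrative only, not real VMware or vendor pricing.

def marginal_license_cost(new_hosts, model,
                          sockets_per_host=2, per_socket_fee=1500.0):
    """'ela' = all-you-can-eat enterprise license agreement (already paid);
    'per_socket' = pay a fee per socket per year for each new host."""
    if model == "ela":
        return 0.0  # sunk cost -- extra hosts add no license expense
    return new_hosts * sockets_per_host * per_socket_fee

# Adding 10 hosts to the private cloud:
print(marginal_license_cost(10, "ela"))         # 0.0 -- vSphere is "free" here
print(marginal_license_cost(10, "per_socket"))  # 30000.0 -- new spend
```

This is why, inside an ELA shop, the decision between vSphere and KVM/Xen turns on capability fit rather than on license economics.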

The primary question, then, is whether or not these are the right solutions.  Does BMC have a better answer for private clouds than Eucalyptus?  Is IBM CloudBurst better than Abiquo for development and test?  

Open source for open source’s sake is not rational.

In addition, focusing only on the economics of open source misses what might be the bigger value – risk reduction.  Closed-source products can die – either because the developer goes out of business, or because an acquirer decides to take the product off the market.  This happens all the time.  For large and well-established technologies, the risk of abandonment is generally lower.  VMware, HP and EMC are not going anywhere soon.

Open source projects, in contrast, can always be continued.  The cost may fall to those dependent on the project, but at least you get the option.  Not so with closed source – especially if the solution is killed by its owner.

Most buyers can get source-code escrow terms that give them access to a product’s source in the event of bankruptcy or similar situations.  In 20 years I have not seen a source escrow addendum include a trigger to release the code if the developer stops or slows investment in it.  Today your vendor might have 20 top-tier developers delivering on a roadmap.  What if in three years they have only four folks maintaining the current code line and making minor updates?  Can you get the source code then?  Typically not.

There’s another issue that often gets overlooked.  Even if you have a source escrow agreement, that doesn’t mean that the code deposits are being made on a regular basis.  It also doesn’t mean that the code is well-commented or that accurate build scripts are included such that a person of “commercially reasonable” skill can take over the code and move it forward.  I have seen this situation happen more than once, including recently, and it’s quite a shock to learn that your vaunted supplier has been careless, lazy, or even deliberately misleading about their source code responsibilities.

CloudBzz Recommendations

1.  Insist on open source (or at least full source access – not escrow) when one or more of the following situations exist:

– the supplier is small or thinly funded (VCs can and do pull the plug even after many million$ have been invested)
– the capability/functionality provided by the technology is strategically important to you, especially when investment must be maintained to remain leading-edge in a fast-moving and intensely competitive market
– migration costs to a different technology are very high and disruptive

2.  Consider closed-source/proprietary solutions when two or more of the following factors are present:

– the functionality provided by the software is not core to your competitive positioning in the market
– replacement costs (particularly internal change costs) are moderate or low
– the functionality and value is so much higher than open source alternatives that you’re willing to take the risk
– the technology is so widely deployed and successful that the risk of abandonment is very low
– the costs are low enough so as not to make your offering uncompetitive or internal environment unaffordable

Balancing risk, capability and control is very difficult – even more so in a young and emerging market like cloud computing.  The decisions made in haste today can have a profound impact on your success in the future – especially if you are a cloud service provider.

While open source can be a very potent source of competitive advantage, it should not be adopted purely on philosophical grounds.  If you do adopt closed source, especially at the core stack level, aggressively manage your exposure so that those “unforeseen events” don’t leave you high and dry.

