VMware Should Run a Cloud or Stop Charging for the Hypervisor (or both)

I had a number of conversations this past week at CloudConnect in Santa Clara regarding the relative offerings of Microsoft and VMware in the cloud market.  Microsoft is going the vertically integrated route by offering their own Windows Azure cloud with a variety of interesting and innovative features.  VMware, in contrast, is focused on building out their vCloud network of service providers that would use VMware virtualization in their clouds. VMware wants to get by with a little help from their friends.

The problem is that few service providers are really VMware’s friends in the long run.  Sure, some enterprise-oriented providers will offer VMware capabilities to their customers, but it is highly likely that they will quickly add support for other hypervisors (Xen, Hyper-V, KVM).  The primary reason for this is cost.  VMware charges too much for the hypervisor, making it hard to be price-competitive vs. non-VMware clouds.  Expect service providers to move to a tiered pricing model where the incremental cost for VMware is passed on to the end customers, which will incentivize migration to the cheaper solutions.  If VMware wants to continue this channel approach but stop enterprises from migrating their apps to Xen, perhaps they need to give away the hypervisor – or at least drop the price to a level that is easy to absorb while still maintaining profitability ($1/month per VM – billed by the hour at $0.0014 per hour, plus some modest annual support fee – would be ideal).
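
For what it’s worth, the math on that price point works out cleanly.  Here’s the back-of-the-envelope check (the 730-hour month is just the 8,760-hour year averaged over 12 months):

```python
# Back-of-the-envelope check on the suggested per-VM price point.
HOURS_PER_MONTH = 8760 / 12.0   # ~730 hours in an average month

monthly_price = 1.00            # $1 per VM per month
hourly_price = monthly_price / HOURS_PER_MONTH

print("Hourly rate: $%.4f" % hourly_price)   # -> Hourly rate: $0.0014
```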

Think about it… If every enterprise-oriented cloud provider lost their incentive to go to Xen, VMware would win.  Being the default hypervisor for all of these clouds would provide even more incentive for enterprise customers to continue to adopt VMware for internal deployments (which is where VMware makes all of their money).  Further, if they offered something truly differentiated (no, not vMotion or DRS), then they could charge a premium.

If VMware does not make this change, I believe they can kiss their position in the cloud goodbye in the next 2 years or so.  Their alternative at that point is to offer their own cloud service to capture the value from their enterprise relationships and dominant position.  They can copy Microsoft’s vertically integrated strategy, enabling push-button deployment to their cloud service from both Spring and vCenter.  This approach has some nice cultural advantages as well.  VMware has a reasonably large enterprise sales force (especially when combined with EMC’s…), and these high-paid guns are unlikely to get any compensation when a customer migrates to Terremark.  There’s a separate provider sales force that does get paid.  If VMware created their own managed service and compensated their direct reps to sell it, adoption would soar.  With their position in the developer community via the Spring acquisition, they’ll pick up some low-hanging fruit as well.

Now, put these concepts together – free hypervisor and managed offering.  Would they lose their service providers?  I doubt it.  Enterprises want choices while continuing to use what they already know.  Terremark, Savvis, and others will have good marketing success with VMware as long as it doesn’t break their financial model.  Further, VMware’s “rising tide” would actually lift all of the other VMware-based service providers and help them better position against and compete with the Xen-based mass-market clouds.  A “VMware Inside” campaign that actually promoted other service providers would also help.

Being in the managed services space would be a very different business for VMware.  The margins are lower, but they could build a very large and profitable cloud offering given their position in the enterprise.  Similarly, a unified communications service based on Zimbra would give them even more value to sell (and to offer through vCloud partners).  As long as they remove the financial incentive for providers to switch to Xen at the same time, they could have a very strong play in this space.

If VMware does not at least make the pricing change for service providers, their future in the cloud is very much at risk. 

p.s. While they’re at it, VMware needs to allow us to integrate directly with ESX and get rid of vCenter in a service provider environment.

Protecting Yourself from Cloud Provider & Vendor Roulette

David Linthicum wrote a piece today in InfoWorld regarding the coming wave of cloud vendor consolidation.  After CA’s acquisition of 3Tera, it’s natural to ask how you can protect yourself from having your strategic vendor acquired by a larger, less focused entity.  Face it, the people building these startups are mostly hoping to have the kind of success that 3Tera had – a reported $100m payday.  A lot of people are concerned with what CA will do with AppLogic – their general history with young technologies is not particularly promising.  If you are building a cloud, AppLogic is the heart of your system.  If CA screws it up (and I’m not saying they will, but if they do…), you’re pretty much hosed.

As I wrote in November, we will start seeing both consolidation and market exits among cloud providers in the not-too-distant future.  So, whether you are building a cloud or using someone else’s cloud, you need a plan to mitigate the all-too-real risks of your vendor going away or having a change-of-control event (e.g. being acquired) that results in a degraded capability.

If you’re building a cloud (private or public), the primary way you can protect yourself is by selecting a vendor with an open source model.  If the commercial entity fails, you can still count on the community to move the product forward – or you can step in and become the commercial sponsor.  If the vendor gets acquired and the new owner takes the project in a direction you don’t want, you can “fork” the project (see Drizzle and MariaDB, forks of the MySQL project that Oracle now owns as a result of the Sun acquisition).  Or, you can start with a community-sponsored project like OpenNebula that has a very open license (Apache).  It is highly unlikely that OpenNebula will go away anytime soon, and due to the licensing model there is no chance that a vendor will get deep control of the project.

AppLogic, in contrast, is not available under an open source model.  If you’re a 3Tera customer, you’re probably very nervous right now.  I’m sure that 3Tera and CA execs are making calls and visits to calm customers now – but if they converted AppLogic to an open source model, it would immediately give current and prospective customers a lot of comfort.  If you’re a prospect, you’re likely holding off until you know how CA will support AppLogic going forward.

If you are making a cloud service provider decision, the challenge is more difficult.  Here you need to consider the type of cloud (IaaS, PaaS, or SaaS), and a lot more factors come into play.  Cross-cloud application migration and federated cloud models are coming, but they are immature at best and deal with only a small subset of cloud deployment topologies.  Perhaps I’ll do a deeper analysis of this later.

Bottom line: you should think twice (or thrice) about basing your cloud solution on a technology with a proprietary commercial license (sorry Reuven).  Vendor size matters less than you think – a large company may well kill an unsuccessful product more quickly than the founders of a startup would.  The primary way to protect yourself is to stick with open source as much as you can.  It’s also typically less expensive.

Amazon Adds Consistency to SimpleDB

Last week Amazon announced the addition of full database consistency as an option for SimpleDB users.  Most of you know that SimpleDB is a “NoSQL” database that allows you to build very scalable Web apps without the typical scaling limitations of SQL databases.  One of the limitations of SimpleDB has been its reliance on “eventual consistency” at a transaction level (see Amazon CTO Werner Vogels’ post on eventually consistent data for more details – it’s a good read – and see his post about the update here).

In short, “eventually consistent” means that an update may not be reflected in the next read of that “object,” but it will eventually get there.  Consistency is the “C” in the ACID (Atomicity, Consistency, Isolation, and Durability) properties that define a proper transactional database.  For shared-data systems, the CAP Theorem states (as Werner puts it) that “of the three properties of shared-data systems–data consistency, system availability, and tolerance to network partitions–only two can be achieved at any given time.”
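
To make “eventually consistent” concrete, here’s a toy sketch of the behavior (pure illustration – this is not how SimpleDB is implemented internally):

```python
# Toy model of eventual consistency: a write is acknowledged by one
# replica, while a read served by another replica still sees old data.

replica_a = {"color": "red"}
replica_b = {"color": "red"}

# A client updates the item; only replica A has the write so far.
replica_a["color"] = "blue"

# A read routed to replica B returns the stale value.
print(replica_b["color"])    # -> red   (stale read)

# Replication eventually propagates the write...
replica_b["color"] = replica_a["color"]

# ...and from then on every read sees the new value.
print(replica_b["color"])    # -> blue  (eventually consistent)
```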

For most of today’s distributed Web systems, the primary trade-off for achieving consistency is performance.  By ensuring that writes are fully propagated across your system before allowing reads, your performance will be affected (and possibly your availability, in some rare instances).  Okay, so we know that very large systems may make this trade-off, but Amazon has in the past made eventual consistency the only model.  If you wanted to enforce consistency, you needed to use a different database solution.  Not anymore.

Amazon has added two features to SimpleDB to address this.  The first is Consistent Reads – you can ensure that your data is fully up to date, so no queries will return stale data.  Here is a nice chart from Werner’s post comparing the old (eventually consistent) model and the new consistent read option.

Eventually consistent read   | Consistent read
-----------------------------|------------------------
Stale reads possible         | No stale reads
Lowest read latency          | Higher read latency
Highest read throughput      | Lower read throughput
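
In code, this is a per-request choice.  Here’s a minimal sketch using the open source boto library (the domain and item names are made up, and you should confirm that your boto version exposes the consistent_read flag):

```python
# Sketch: SimpleDB reads with and without the new consistency option,
# via the boto library. Domain/item names here are hypothetical.
import boto

sdb = boto.connect_sdb()                 # AWS credentials from the environment
domain = sdb.get_domain('my_app_data')   # hypothetical domain

# Default: eventually consistent -- lowest latency, stale reads possible.
item = domain.get_attributes('user-1234')

# New option: consistent read -- no stale data, at some latency/throughput cost.
item = domain.get_attributes('user-1234', consistent_read=True)
print(item)
```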

 

The second feature, conditional puts and deletes, is a bit more complicated.

“Conditional Puts allow inserting or replacing values of one or more attributes for a given item if the expected consistent value of a single-valued attribute matches the specified expected value. Conditional puts are a mechanism to eliminate lost updates caused by concurrent writers writing to the same item as long as all concurrent writers use conditional updates.”

My assumption here is that you can read an attribute before doing an update, then use the value you read as a condition that must hold before your update is accepted.  That way, if another process jumps in ahead of you with an update, you don’t overwrite their change.  You have to write the handling code that decides what to do next when your write is rejected, but you get more control.
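
If that reading is right, the standard pattern is optimistic concurrency with a version attribute.  A hedged sketch with boto (the domain and attribute names are invented; expected_value is boto’s conditional-put parameter as I understand it – check your version):

```python
# Sketch: optimistic concurrency on a SimpleDB item via a single-valued
# 'version' attribute. Names are hypothetical; verify your boto API.
import boto
from boto.exception import SDBResponseError

sdb = boto.connect_sdb()
domain = sdb.get_domain('my_app_data')

# Read the item consistently and note the version we based our change on.
item = domain.get_attributes('user-1234', consistent_read=True)
current_version = item['version']   # assumes a single-valued attribute

try:
    # The put succeeds only if 'version' still has the value we read.
    domain.put_attributes(
        'user-1234',
        {'email': 'new@example.com',
         'version': str(int(current_version) + 1)},
        expected_value=['version', current_version])
except SDBResponseError:
    # Another writer got there first -- re-read and decide what to do next.
    pass
```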

“Conditional deletes allow deleting an item, an attribute or an attribute’s value for a given item if the expected consistent value of a single-valued attribute of an item matches the specified expected value. If the current value does not match the expected value, or if the attribute is gone altogether, the delete is rejected.”

The use case would be similar to the put above.  Again, you have a lot more control to avoid stomping on another delete or update that just happened…
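
Sketching the delete side the same way (again with invented names – and note that the exact parameter name for the condition may differ across boto versions):

```python
# Sketch: delete an item only if a guard attribute still holds its
# expected value. Names hypothetical; condition parameter name assumed.
import boto

sdb = boto.connect_sdb()
domain = sdb.get_domain('my_app_data')

# Rejected if another writer has already changed 'status' (or removed it).
domain.delete_attributes('user-1234',
                         expected_values=['status', 'inactive'])
```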

Taken together, these features make Amazon SimpleDB a more robust solution for managing databases for Web-scale applications.  You can choose to favor performance and availability, or you can impose consistency where your application needs it.  It’s a good update to a service that has seemed to lag in usage vs. other Amazon tools.

Skytap Goes Deep in Networks

Skytap is known as a cloud dev/test provider today, but they have been seeing more workloads come on board, including ERP migration, training, demos, etc.  So perhaps they are not as targeted as we think.  This can be a risk – customers start to wonder what you stand for.  Skytap entered the dev/test market without a grand plan to expand beyond it; the expansion is being driven by customers.

Today they are announcing “multi-network” capabilities that enable multi-level network topologies as flexible as your on-premises networks.  Only it’s a lot easier – you can configure it all in a browser.  It even allows you to save configurations and check them in and out of the Skytap repository.  This is their “virtual private cloud” capability.  It goes significantly beyond the current Amazon VPC solution, with much more flexibility and configurability.  It’s also a lot less work to set up than Amazon VPC, which basically assumes that the developer is part of the network team.

[Image: Skytap diagram of multi-tier network topologies]

Skytap is basically claiming the ability to enable the kinds of networks shown above.  This is a nice differentiator and makes it easier for enterprises to move multiple complex workloads, like SAP and other multi-tier applications, to a cloud.
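
I don’t have Skytap’s actual configuration format in front of me, but conceptually a saved multi-network configuration captures something like the following (a purely hypothetical sketch, not Skytap’s API):

```python
# Purely illustrative: the kind of information a saved multi-network
# topology would need to capture. NOT Skytap's actual format or API.
topology = {
    "name": "three-tier-erp-landscape",
    "networks": [
        {"name": "web-tier", "subnet": "10.0.1.0/24"},
        {"name": "app-tier", "subnet": "10.0.2.0/24"},
        {"name": "db-tier",  "subnet": "10.0.3.0/24"},
    ],
    # Inter-network routes: web talks to app, app talks to db,
    # but web cannot reach db directly.
    "routes": [
        ("web-tier", "app-tier"),
        ("app-tier", "db-tier"),
    ],
}
```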
