Category Archives: Vendors

Open Call to VMware – Commercialize Cloud Foundry Software!

After spending time at VMware and Cloud Expo last week, I believe that VMware’s lack of full backing for Cloud Foundry software is holding back the entire PaaS market in the enterprise.

Don’t get me wrong, there’s a lot of momentum in PaaS despite how very immature the market is. But this momentum is in pockets and largely outside of the core of software development in the enterprise. CloudFoundry.com might be moving along, but most enterprises don’t want to run the bulk of their applications in a public cloud. Only through the Cloud Foundry software layer will enterprises really be able to invest. And invest they will.

PaaS-based applications running in the enterprise data center are going to replace (or envelop) traditional app server-based approaches. It is just a matter of time, driven by developer productivity and native support for cloud models. Cloud Foundry has the opportunity to be one of the winners, but it won’t happen if VMware fails to put their weight behind it.

Some nice projects like Stackato from ActiveState are springing up around Cloud Foundry, but the enterprises I deal with every day (big banks, insurance companies, manufacturers) will be far more likely to commit to PaaS if a vendor like VMware gets fully behind the software layer. Providing an open source software support model is fine and perhaps a good way to start. However, this is going to be a lot more interesting if VMware provides a fully commercialized offering with all of the R&D enhancements, etc.

This market is going to be huge – as big or bigger than the traditional web app server space. It’s just a matter of time. Cloud Foundry is dominating the current discussion about PaaS software but lacks the full support of VMware (commercial support, full productization). This is just holding people back from investing.  VMware reps ought to be including Cloud Foundry in every ELA, every sales discussion, etc. and they need to have some way to get paid a commission if that is to happen. That means they need something to sell.

VMware’s dev teams are still focused on making Cloud Foundry more robust and scalable. Stop! It’s far better to release something that’s “good enough” than to keep perfecting and scaling it.
“The perfect is the enemy of the good.” – Voltaire

It’s time for VMware to get with the program and recognize what they have and how it can be a huge profit engine going forward – but they need to go all in starting now!

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.


Google Compute Engine – Not AWS Killer (yet)


Google launched their new “Google Compute Engine” yesterday at I/O. There’s more info about GCE on the Google Developers Blog, and a nice analysis by Ben Kepes on CloudAve. If “imitation is the sincerest form of flattery,” then it’s clear the folks at Google hold Amazon Web Services’ EC2 in very high regard. In many ways, GCE is a really good copy of EC2 circa 2007/2008. There are some differences – like really strong encryption for data at rest and in motion – but essentially it is EC2 as it looked 4-5 years ago.

GCE is missing a lot of what larger enterprises will need – monitoring, security certifications, integration with IAM systems, SLAs, etc. GCE also lacks some of the things that really got people excited about EC2 early on – like an AMI community, or even an AMI-style model that lets me create an image from my own server.

One of the key selling points that people are jumping on is pricing. Google claims 50% lower pricing, but that doesn’t hold up against Amazon’s reserved instances, which are actually lower over time than GCE.  And price is rarely the primary factor in enterprise buying anyway. Plus, you have to assume that Amazon is readying a pricing response, so whatever perceived advantage Google might have there will quickly evaporate.

Other missing features that AWS provides today:

  • PaaS components – Relational Database Service (MySQL, SQL Server and Oracle), Elastic MapReduce, CloudFront CDN, ElastiCache, Simple Queue Service, Simple Notification Service, Simple Email Service
  • Direct Connect – the ability to run a dedicated network connection between your data center and AWS
  • Virtual Private Cloud – secure instances that are not visible to public internet
  • Deployment tools – IAM, CloudWatch, Elastic Beanstalk, CloudFormation
  • Data Migration – AWS Import/Export via portable storage devices (e.g. sneaker net) for very large data sets
  • and others

Bottom line is that GCE is no AWS Killer. Further, I don’t think it ever will be. Even more – I don’t think that should be Google’s goal.

What Google needs to consider is how to create the 10x differentiation bar that any new startup must have.  Google Search was that much better than everybody else when it launched.  GMail crushed Yahoo Mail with free storage, conversation threads and amazingly fast responsiveness. Google Maps had AJAX, which blew away MapQuest and the others. And so on. You can’t just be a bit better and win in this market. You need to CRUSH the competition – and that ain’t happening in this case.

What would GCE have to offer to CRUSH AWS?

  • Free for production workloads up to a fairly robust level (like GMail’s free GBs vs. Yahoo’s puny MBs – the ability to run most small apps at no cost at all would be highly disruptive to Amazon)?
  • A vastly superior PaaS layer (PaaS is the future – If I were rewriting The Graduate… “just one word – PaaS”)?
  • A ginormous data gravity well – imagine if Google built a data store of every bit of real-time market data, trade executions, corporate actions, etc. – they’d disrupt Bloomberg and Thomson Reuters too!  Or other data – what data could they own (like GIS, but more broadly interesting) that could drive this?
  • Enterprise SaaS suite tied to GCE for apps and extensions – what if Google bought SugarCRM, Taleo, ServiceNow and a dozen other SaaS providers (or built their own Google-ized versions of these solutions), disrupted the market (hello, Salesforce-like CRM but only free), and then had a great compute story?
  • A ton of pre-built app components (whatever they might be) available in a service layer with APIs?

No matter what the eventual answer needs to be, it’s not what I see on the GCE pages today. Sure, GCE is mildly interesting and it’s great that Google is validating the last 6 years of AWS with their mimicry, but if there’s ever going to be an AWS killer out there – this ain’t it.


Open Clouds at Red Hat

Red Hat has been making steady progress toward what is shaping up as a fairly interesting cloud strategy.  Building on their Deltacloud API abstraction layer and their CloudForms IaaS software, a hybrid cloud model is starting to emerge. Add to this their OpenShift PaaS system, and you can see that Red Hat is assembling a lot of key components. Add the fact that Red Hat has gotten very involved with OpenStack, and you have an interesting dynamic with CloudForms.

Red Hat is the enterprise king in Linux (RHEL), strong in application servers (JBoss), and has a lot of very large customers.  Their VM environment, RHEV (aka KVM) won’t displace VMware in the enterprise space any time soon, but it is pretty interesting in the service provider space.

Red Hat’s community open source model will be very appealing to the market.  In fact, any of the OpenStack distro providers should be at least a bit worried that Red Hat might leapfrog them.  With their OpenStack move, CloudForms is being repositioned as a hybrid cloud management tool.  Now their competition in the future might be more along the lines of RightScale and enStratus.  What I’ve seen so far of CloudForms shows a lot of promise, though it’s still pretty immature.

Red Hat is pushing a message about “open clouds” – which is less about open source than it is about avoiding vendor lock-in with cloud providers.  That’s something CloudForms is intended to address.  It’s also why OpenShift has been released as an open source project (Apache 2.0 – yay) that can be deployed on other clouds and non-cloud infrastructures.

The big opportunity, IMO, is for Red Hat to go very strong on the OpenStack path for IaaS (e.g. release and support an enhanced Red Hat distro), really push their OpenShift story vs. Cloud Foundry based on their ability to drive community (along with its deep integration with JBoss), and move CloudForms further up the stack to a governance and multi-cloud management framework (their messaging on this is not very strong).  It’s this model of openness – any cloud, any app – that will make their “Open Cloud” vision a reality.


Cloud Stack Red Ocean Update – More Froth, but More Clarity Too

The cloud stack market continues to go through waves and gyrations, but increasingly now the future is becoming more clear.  As I have been writing about for a while, the number of competitors in the market for “cloud stacks” is totally unsustainable.  There are really only four “camps” now in the cloud stack business that matter.

The graphic below shows only some of the more than 40 cloud stacks I know about (and there are many I surely am not aware of).

VMware is really on its own.  Not only do they ship the hypervisor used by the vast majority of enterprises, but with vCloud Director and all of their tools, they are really encroaching on the traditional data center/systems management tools vendors.  They have great technology, a huge lead in many ways, and will be a force to reckon with for many years.  Many customers I talk with, however, are very uncomfortable with the lack of openness in the VMware stack, the lack of support for non-virtualized environments (or any other hypervisor), and a very rational fear of being monopolized by this machine.

Data Center Tools from the big systems management vendors have all been extended with cloud capabilities for use in both private and public clouds.  Late to the party, they are investing heavily and have shown fairly significant innovation in recent releases.  Given that the future of the data center is a cloud, this market is both a huge opportunity and an existential threat for them.  Deep hooks into the data center with service desks, service catalogs, automation and orchestration capabilities provide near-term protection.  There are just too many trained resources with too much invested for most IT organizations to just walk away.

Unlike the VMware approach, all of these vendors support a more heterogeneous environment – especially CA and BMC.  Most support some combination of Xen, KVM and Hyper-V in addition to VMware hypervisors.  They are also moving up-stack, supporting integration with public clouds such as Amazon and others, application-level functionality, and more.

OpenStack is the new 800-lb gorilla.  In less than 18 months OpenStack has emerged as the most vibrant, innovative and fast-moving segment of this market.  Evidence of progress includes contributed code from over 1,000 developers, more than 128 companies in the community, a growing list of commercial distributions from  incredibly smart teams, and a maturing technology base that is starting to gain traction in the enterprise. It’s still very early days for OpenStack, but it very much feels like the counterweight to VMware’s controlling influence.

The froth in this market is coming from an increasing number of very cool (and occasionally well-funded) OpenStack commercialization efforts.  As with most markets, there will be winners and losers, and some of these efforts will not make it.  This market is so new that whatever shakeout may occur, it won’t happen for a few years.

Other solutions are going to find the going tougher and tougher.  Some may be doing well and growing today, but ultimately the market will shake out as it always does and many of these current solutions will either find new use-cases and missions, or they will be shuttered. I have unconfirmed reports of at least two of the currently available stacks on my list being withdrawn from the market for lack of sales.  Is this the start of a “great cloud stack shakeout?”

Where are we heading?

The majority of the market in 3 years will have coalesced into three big buckets, and it’s starting to happen now.  vCloud, OpenStack and the big data center vendors will rule the roost at the core stack level going forward.  The graphic below is not intended to show the size of these markets.

The guys in the “other” category reading this post are probably not ready to hear the bad news, but this is what I believe to be the ultimate state. There will be niche survivors, some who will migrate to the OpenStack island (rumors abound), and others who may pivot to new markets or solution designs.  Some are just focusing on Asia, especially China, since it’s more of a wild west scenario and just showing up is guaranteed to generate deals.  However, many of them will have gone out of business by 2015 or be barely scraping by. Such is the nature of new markets.

One key distinction with the “big four” data center/systems management tools vendors is that they are not going to be the same kind of open and vibrant ecosystems as OpenStack or vCloud.  With their huge sales organizations and account presence, they don’t necessarily need the leverage that an ecosystem provides. Some in the #clouderati community might conclude that they are toast.  I’ve heard several say that there will be only two choices in the coming years, but I disagree and do think that the DC tools guys get it now and have a lot of money to invest.

I base this opinion on spending most of my days working with large enterprises and governments that have millions invested in these vendors, and I expect a fair bit of enterprise cloud infrastructure – especially for their more mission-critical applications – to represent a real long-term opportunity for the big guys.  vCloud and OpenStack will certainly hurt them in their core markets, however, and there will be lots of pivots and new initiatives from these mega vendors to ensure their relevancy for a long time to come.

Bottom line?

The market is starting to form up, and it looks like there will be three big segments going forward (and a small market of “other”). If you’re not in one of them, and solidly so, you’re doing something else in a few years. There just won’t be enough revenue to support 40+ profitable and viable vendors.  How many will survive? That’s a tough question, but here’s my prediction for the market breakdown in 2018.

VMware:  1

OpenStack commercial distributions:  4 viable, 1 or 2 that are clear leaders

DC Tools:  4 main and a couple smaller guys

Other: at most 3, mainly in niche markets

Total:  12 viable cloud stack businesses in 2018

What do you think?

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.

The Red Ocean of Cloud Infrastructure Stacks (updated)

Update: I’m still revising this… Reposting now – but send me your comments via @CloudBzz on Twitter if you have them.

It seems like every day there’s a new company touting their infrastructure stack.   I’m sure I’m missing some, but I show more than 30 solutions for building clouds below, and I am sure that more are on their way.  The market certainly can’t support so many participants!  Not for very long anyway.  This is the definition of a “red ocean” situation — lots of noise, and lots of blood in the water.

This is the list of the stacks that I am aware of:

I. Dedicated Commercial Cloud Stacks

II.  Open Source Cloud Stacks

III.  IT Automation Tools with Cloud Functionality

IV.  Private Cloud Appliances

I hope you’ll pardon my dubious take, but I can’t possibly understand how most of these will survive.  Sure, some will because they are big and others because they are great leaps forward in technology (though I see only a bit of that now).  There are three primary markets for stacks:  enterprise private clouds, provider public clouds, and public sector clouds.  In five years there will probably be at most 5 or 6 companies that matter in the cloud IaaS stack space, and the rest will have gone away or taken different routes to survive and (hopefully) thrive.

If you’re one of the new stack providers – think long and hard about this situation before you make your splash.  Sometimes the best strategy is to pick another fight.  If you swim in this red ocean, you might end up as shark bait.


CloudFloor Drives the Cloud To Achieve Business Results


CloudFloor (Waltham, MA) is getting close to starting the beta program for CloudControl, their system to tie cloud usage to measurable business metrics.  I had an interesting call with co-founder and CTO Imad Mouline last week to learn more about this innovative system.  There are a couple of ways to approach the concept of CloudFloor.  The most obvious one deals with controlling costs by shutting down instances when they are no longer needed, but it’s also the least interesting approach.  And there are already companies such as Cloud Cruiser addressing the cost management and cloud chargeback business.

The CloudFloor guys started seeing big uptake in cloud usage a while ago and were able to glean some pretty interesting insights from their performance data.  Insights such as the “noisy neighbor” problem in a multi-tenant environment (it’s real), users deploying lots of VMs but not shutting them down when no longer needed, etc.  They saw a lot of large enterprises overspending on cloud while also suffering application performance problems caused by simple, easily remedied mistakes.  CloudFloor was formed to address these issues and beyond.

What struck me as most interesting was the sophistication of how they tie non-cost business metrics into the equation.  Think about any business and the key metrics that drive their success.  As Imad pointed out, companies can track many metrics today but very few are core and critical to their business.  For example, at an auction site like eBay they know that the two most important metrics are number of listings and number of bids at any given point in time.

If you’re in a primarily online business, metrics are heavily influenced by the amount of infrastructure you have deployed at any given time.  Too much and you’re losing money.  Too little and you’re losing money… Like Goldilocks and the Three Bears, the trick is to get it “just right.”

One of my previous startups was in the digital imaging space.  The number of images uploaded at any given point directly correlated with print and gift orders. Having sufficient infrastructure to handle the upload loads at any given time was critical.  Having too much was wasteful – and since we started this pre-cloud we were over-provisioned a majority of the time.  However, at the very biggest peak times we sometimes were under-provisioned.  This caused uploads to slow or fail which in turn resulted in lost revenues.

Had I had a reason to do so (i.e., had I been using cloud), it would have been pretty easy for me to create a formula that calculated the marginal cost of additional infrastructure vs. the marginal gross profit that would be enabled by provisioning more instances.  Given that formula, I could then maximize my profit by having a system that intelligently managed the balance to the point where – in theory – an extra $1.00 spent on cloud would result in at least an extra $1.00 in gross profit (all other costs being equal).  Beyond that, I’d see diminishing returns.  Of course, it would never get exactly that precise, but it could be close.
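A minimal sketch of that balancing rule in Python, assuming purely illustrative numbers for instance cost, per-instance upload capacity and profit per upload (none of these are real figures):

```python
# Hypothetical sketch of the scaling rule described above: keep adding
# instances while the expected extra gross profit from the next instance
# exceeds its hourly cost. All numbers and functions are illustrative.

INSTANCE_COST_PER_HOUR = 0.50          # assumed cloud price, not a real quote

def expected_marginal_profit(extra_instances: int) -> float:
    """Estimate extra gross profit per hour enabled by N more upload servers.

    In the imaging example this would come from historical data:
    uploads handled per instance * profit per upload, with diminishing
    returns as additional instances capture less and less unmet demand.
    """
    uploads_per_instance = 200          # illustrative capacity
    profit_per_upload = 0.004           # illustrative gross profit per upload
    return uploads_per_instance * profit_per_upload * (0.9 ** extra_instances)

def instances_to_add(max_new: int = 20) -> int:
    """Add capacity only while marginal profit >= marginal cost."""
    n = 0
    while n < max_new and expected_marginal_profit(n) >= INSTANCE_COST_PER_HOUR:
        n += 1
    return n

if __name__ == "__main__":
    print(f"Provision {instances_to_add()} additional instance(s) this hour")
```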

Of course, you can also have metrics that may not so easily tie to microeconomics.  If you’ve promised a certain SLA level for transactions (e.g., cart page load in 1.5 seconds, purchase-to-confirmation in 4 seconds, etc.), CloudControl can optimize the amount of cloud infrastructure you have deployed to meet those SLAs.  This is what BSM – Business Service Management – is all about.

They also can do things like manage geographic load balancing, traffic shaping and more.  There is a pretty sophisticated vision at play here.

So, how does it work?

Their core “Principles Engine” (“PE”) accepts data from a number of different feeds – could be Google Analytics, data generated from the application, or other information.  PE then turns that data into visibility and insights.  If that’s all you need, you’re golden — CloudControl is free for the visibility bits (you pay if you want their automation to control cloud resources).  See the graphic below.


Then you provide your goals and principles for CloudControl to manage.  CloudControl then manages global traffic, cloud instances and more (can call out to any service).  All of this goes towards hitting the business metrics established in the Principles Engine.

One of the things they realized early on is that a holistic approach to cloud BSM would have to go broader than the capabilities of individual clouds. Geographic load balancing, failover and other Internet-level traffic-shaping techniques are absolutely critical to hitting the metrics in many cases.  This might also include managing across different vendor clouds and even internal clouds (very complicated DNS management required).

What they needed, then, was a platform on which to manage these capabilities, so they went out and acquired a small but growing DNS provider (Microtech Ltd from the UK) and are now in the DNS management business too.  DNS is important to performance, security and availability – which is why CloudFlare is able to do what it does (protect and speed up web sites).  They still sell the DNS services standalone, but the strategic rationale for the acquisition was the breadth of the vision for business service management.  This was a really smart play and will set them apart from many potential competitors.

CloudFloor has taken a very sophisticated approach to tie cloud usage, costs and capabilities to the business metrics you care about most.  They are going to beta soon and it should be very interesting to see where they take the platform.

———–

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/cloudfloor-drives-the-cloud-to-achieve-business-results/. You can follow CloudBzz on Twitter @CloudBzz.

Dell (and HP) Join OpenStack Parade to the Enterprise…


Update:  HP also announced support for OpenStack on its corporate blog.  And the beat goes on…

The OpenStack Parade is getting bigger and bigger. As predicted, enterprise vendors are starting to announce efforts to make OpenStack “Enterprise Ready.”  Today Dell announced their support for OpenStack through their launch of the “Dell OpenStack Cloud Solution.”  This is a bundle of hardware, OpenStack, a Dell-created OpenStack installer (“Crowbar”), and services from Dell and Rackspace Cloud Builders.

Dell joins Citrix as a “big” vendor supporting OpenStack to their customers.  Startups such as Piston are also targeting the OpenStack space, with a focus on the enterprise.

Just one year old, the OpenStack movement is a real long-term competitor to VMware’s hegemony in the cloud. I fully expect to see IBM, HP and other vendors jumping into the OpenStack Parade in the not too distant future.

Forward PaaS: VMware’s Cloud Foundry First Down

I know it’s baseball season, but there’s no passing in baseball and this post will just work better as a football analogy.

VMware’s announcement this week of Cloud Foundry (twitter @cloudfoundry) has gotten a lot of attention from the cloud community, and for good reason. Just as hardware is a low-margin commodity business, hardware as a service (e.g. IaaS) is the same. Ultimately, price will be the core basis for competition in the IaaS space and a lot of high-cost “enterprise” clouds will struggle to compete for business without some real differentiation.

For the past few years, PaaS offerings from Salesforce (Force.com), Microsoft (Azure), Google (AppEngine) and newcomers like Heroku (now owned by Salesforce), EngineYard and others have really gained a lot of traction. Developers really don’t like sysadmin work as a rule, and provisioning instances on EC2 is sysadmin work. Writing code that turns into applications, features, etc. that end-users use is far more interesting to the developers I’ve worked with (and who’ve worked for me). PaaS, then, is for developers.

But PaaS before this week meant lock-in. Developers, and the people who pay them, don’t like to be locked into specific vendor solutions. If you write for Azure, the fear (warranted or not) is that you can only run on Azure. Given that Microsoft has totally fumbled the opportunity to make Azure a partner-centric platform play, that means you need to run your Azure apps on Microsoft’s cloud. Force.com is even worse – with its own language, data model, etc., there’s not even a chance that you can run your code elsewhere without major rework. Force.com got traction primarily with people building extensions to Salesforce’s SFA and CRM offerings – though some people did do more with it. VMforce (Spring on Force.com) was supposed to change the openness issue by providing a framework for any Java apps to run. Google AppEngine is also proprietary in many respects, and when it launched with just a single language (Python!), a lot of developers shrugged. Even the proprietary PaaS components of AWS have been a problem. I could not get my developers to use SimpleDB back in 2008 because, as they rightly pointed out, we’d be stuck if we wanted to move off of EC2 at some point.

Lots of flags on the field. Holding! Illegal receiver! Neutral zone infraction!

There have been some attempts to publish PaaS frameworks that can run on other clouds, but they have failed to gain much traction. (carried off the field on a stretcher? yeah, that works).

Along comes CloudFoundry by VMware and — INTERCEPTION!

In fact, it’s like a whole new game just started. On their first possession VMware completed a perfectly executed forward PaaS. It’s 1st & 10 on their own 20 yard line. There’s a lot of field out there, and while the defense is in total disarray for the moment, it’s going to take a lot of perfect execution to score a CloudFoundry touchdown.

The Cloud Foundry Playbook

VMware really nailed it on the launch, with a very compelling playbook of offensive and defensive plays that should have most PaaS competitors reeling. Here’s their graphic that shows the core concepts:

Shotgun Formation: Across the top you can see three programming frameworks included at launch.  Spring (Java – SpringSource owned by VMware), Rails, and node.js.  You can expect more frameworks to be supported – including Python and PHP.  Ideally they would add .NET too, though not sure if the licensing can work there (a huge chunk of corporate apps are Windows/.NET based).  They also added support for MongoDB, MySQL and Redis for data management.

The Open Blitz: VMware did an incredibly good thing by launching the core Cloud Foundry project as an Apache-licensed open source project.  While I have some concerns around their lack of a community governance model, the fact that they went with Apache vs. a dual-license GPL/Commercial model like MySQL is incredibly aggressive.  I could, if I wanted to, grab Cloud Foundry code, create my own version (e.g. Bzz Foundry) and sell it for license fees with no need to pay VMware anything.  The reality is that I could, but I would not do that and VMware knows that their own development teams will be the key to long term sustainability of this solution.  That said, a cloud service provider that wants to add Cloud Foundry on top of their OpenStack-based cloud could do so without any licensing fees.  I can be part of the “Cloud Foundry Federation” without having to be a vCloud VSPP provider.

Special Teams: Cloud Foundry is deployable in an enterprise private cloud, a public cloud, or what they call a “micro cloud” model (to run on a laptop for development).  I suspect they will have a very strong licensing and maintenance business for the enterprise versions of Cloud Foundry.  They’ll also get support and maintenance fees from many cloud service providers who see the value in paying for it.  Of course, CloudFoundry.com is a service itself, which may be a problem for other cloud service providers to join the federated model.  This is something they will need to think about – EMC Atmos Online eventually had to be closed to new customers based on push back from other service providers who were looking to also be in the cloud storage business.  It’s hard to get service providers to use your stuff if you’re competing against them.

Just over a year ago I argued that VMware should “Run a Cloud…” as one of their options.  In fact, I predicted that Spring is the key to them becoming a cloud provider:

Their alternative at that point is to offer their own cloud service to capture the value from their enterprise relationships and dominant position.  They can copy the vertically integrated strategy of Microsoft to make push-button deployment to their cloud service from both Spring and vCenter.

Gartner’s Chris Wolf is following a similar line of thinking, especially when you add last week’s EMC -> VMware Mozy transfer.

So where does that leave Team CloudFoundry?

For now, they are on the field, in the game, and playing like winners.  Let’s see if they can march down the field before the defense gets into a position of strength.

——-

(c) 2011 CloudBzz / TechBzz Media, LLC.  All rights reserved.  This post originally appeared at http://www.cloudbzz.com/seamicro-atom-and-the-ants/. You can follow CloudBzz on Twitter @CloudBzz.

SeaMicro: Atom and the Ants

How the Meek Shall Inherit The Data Center, Change The Way We Build and Deploy Applications, And Kill the Public Cloud Virtualization Market

The tiny ant. Capable of lifting up to 50 times its body weight, an ant is an amazing workhorse with by far the highest “power to weight” ratio of any living creature. Ants are also among the most populous creatures on the planet. They do the most work as well – a bit at a time, ants can move mountains.

Atom chips (and ARM chips too) are the new ants of the data center. They are what power our smartphones, tablets and ever more consumer electronics devices. They are now very fast, yet surprisingly thrifty with energy – giving them the highest ratio of computing power to energy consumed of any microprocessor.

I predict that significantly more than half of new data center compute capacity deployed in 2016 and beyond will be based on Atoms, ARMs and other ultra-low-power processors. These mighty mites will change much about how application architectures will evolve too. Lastly, I seriously believe that the small, low-power server model will eliminate the use of virtualization in a majority of public cloud capacity by 2018. The impact in the enterprise will be initially less significant, and will take longer to play out, but in the end it will be the same result.

So, let’s take a look at this in more detail to see if you agree.

This week I had the great pleasure to spend an hour with Andrew Feldman, CEO and founder of SeaMicro, Inc., one of the emerging leaders in the nascent low-power server market. SeaMicro has had quite a great run of publicity lately, appearing twice in the Wall Street Journal related to their recent launch of their second-generation product – the SM10000-64 based on a new dual-core 1.66 GHz 64-bit Atom chip created by Intel specifically for SeaMicro.

SeaMicro: 512 Cores, 1TB RAM, 10 RU

Note – the rest of this article is based on SeaMicro and their Atom-based servers.  Calxeda is another company in this space, but uses ARM chips instead.

These little beasties, taking up a mere 10 rack units of space (out of 42 in a typical rack), pack an astonishing 256 individual servers (512 cores), 64 SATA or SSD drives, up to 160GB of external network connectivity (16 x 10GigE), and 1.024 TB of DRAM. Further, SeaMicro uses ¼ of the power, ¼ the space and costs a fraction of a similar amount of capacity in a traditional 1U configuration. Internally, the 256 servers are connected by a 1.28 Tbps “3D torus” fabric modeled on the IBM Blue Gene/L supercomputer.

The approach of using low-power processors in a data center environment is detailed in a paper by a group of researchers out of Carnegie Mellon University. In this paper they show that cluster computing using a FAWN (“Fast Array of Wimpy Nodes”) approach is, overall, “substantially more energy efficient than conventional high-performance CPUs” at the same level of performance.

The Meek Shall Inherit The Earth

A single rack of these units would boast 1,024 individual servers (1 CPU per server), 2,048 cores (total of 3,400 GHz of compute), 4.1TB of DRAM, and 256TB of storage using 1TB SATA drives, and communicate at 1.28Tbps at a cost of around half a million dollars (< $500 per server).
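A quick back-of-the-envelope check of those rack-level numbers, using the per-chassis figures quoted earlier and assuming four 10 RU chassis in a standard 42U rack and the roughly half-million-dollar rack price:

```python
# Back-of-the-envelope check of the rack-level numbers above, using the
# per-chassis figures quoted earlier (256 servers, 512 cores, 1.024 TB DRAM,
# 64 drives, 10 RU). The ~$500k rack price is the article's estimate.

CHASSIS_PER_RACK = 42 // 10            # four 10 RU chassis fit in a 42U rack

servers = 256 * CHASSIS_PER_RACK       # 1,024 single-socket servers
cores = 512 * CHASSIS_PER_RACK         # 2,048 cores
ghz = cores * 1.66                     # ~3,400 GHz of aggregate clock
dram_tb = 1.024 * CHASSIS_PER_RACK     # ~4.1 TB of DRAM
storage_tb = 64 * CHASSIS_PER_RACK     # 256 TB with 1 TB SATA drives

rack_price = 500_000                   # assumed, per the article
print(servers, cores, round(ghz), dram_tb, storage_tb)
print(f"price per server: ${rack_price / servers:,.0f}")   # ~$488, under $500
```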

$500/server – really? Yup.

Now, let’s briefly consider the power issue. SeaMicro saves power through a couple of key innovations. First, they’re using these low-power chips. But CPU power is typically only 1/3 of the load in a traditional server. To get real savings, they had to build custom ASICs and FPGAs to get 90% of the components off of a typical motherboard (which is now the size of a credit card, with 4 of them on each “blade”). Aside from capacitors, each motherboard has only three types of components – the Atom CPU, DRAM, and the SeaMicro ASIC. The result is 75% less power per server. Google has stated that, even at their scale, the cost of electricity to run servers exceeds the cost to buy them. Power and space consume >75% of data center operating expense. If you save 75% of the cost of electricity and space, these servers pay for themselves – quickly.

If someone just gave you 256 1U traditional servers to run – for free – it would be far more expensive than purchasing and operating the SeaMicro servers.

Think about it.

Why would anybody buy traditional Xeon-based servers for web farms ever again? As the saying goes, you’d have to pay me to take a standard server now.

This is why I predict that, subject to supply chain capacity, more than 50% of new data center servers will be based on this model in the next 4-5 years.

Atoms and Applications

So let’s dig a bit deeper into the specifics of these 256 servers and how they might impact application architectures. Each has a dual-core 1.66GHz 64-bit Intel Atom N570 processor with 4GB of DRAM. These are just about ideal Web servers and, according to Intel, deliver the highest performance per watt of any Internet-workload processor they’ve ever built.

They’re really ideal “everyday” servers that can run a huge range of computing tasks. You wouldn’t run HPC workloads on these devices – such as CAD/CAM, simulations, etc. – or a scale-up database like Oracle RAC. My experience is that 4GB is actually a fairly typical VM size in an enterprise environment, so it seems like a pretty good all-purpose machine that can run the vast majority of traditional workloads.

They’d even be ideal as VDI (virtual desktop) servers, where literally every running Windows desktop would get its own dedicated server. Cool!

Forrester’s James Staten, in a keynote address at CloudConnect 2011, recommended that people write applications that use many small instances when needed vs. fewer larger instances, and aggressively scale down (e.g. turn off) their instances when demand drops. That’s the best way to optimize economics in metered on-demand cloud business models.

So, with a little thought there’s really no need for most applications to require instances that are larger than 4GB of RAM and 1.66GHz of compute. You just need to build for that.

And databases are going this way too. New and future “scale out” database technologies such as ScaleBase, Akiban, Xeround, dbShards, TransLattice, and (at some future point) NimbusDB can actually run quite well in a SeaMicro configuration, just creating more instances as needed to meet workload demand. The SeaMicro model will accelerate demand for scale-out database technologies in all settings – including the enterprise.

In fact, some enterprises are already buying SeaMicro units for use with Hadoop MapReduce environments. Your own massively scalable distributed analytics farm can be a very compelling first use case.

This model heavily favors Linux due to the far smaller OS memory footprint as compared with Windows Server. Microsoft will have to put Windows Server on a diet to support this model of data center or risk a really bad TCO equation. SeaMicro is adding Windows certification soon, but I’m not sure how popular that will be.

If I’m right, then it would seem that application architectures will indeed be impacted by this – though in the scheme of things it’s probably pretty minor and in line with current trends in cloud.

Virtualization? No Thank You… I’ll Take My Public Cloud Single Tenant, Please!

SeaMicro claims that they can support running virtualization hosts on their servers, but for the life of me I don’t know why you’d want to in most cases.

What do you normally use virtualization for? Typically it’s to take big honking servers and chunk them up into smaller “virtual” servers that match application workload requirements. For that you pay a performance and license penalty. Sure, there are some other capabilities that you get with virtualization solutions, but these can be accomplished in other ways.

With small servers being the standard model going forward, most workloads won’t need to be virtualized.

And consider the tenancy issue. Your 4GB 1.66GHz instance can now run on its own physical server. Nobody else will be on your server impacting your workload or doing nefarious things. All of the security and performance concerns over multi-tenancy go away. With a 1.28 Tbps connectivity fabric, it’s unlikely that you’ll feel their impact at the network layer as well. SeaMicro claims 12x available bandwidth per unit of compute than traditional servers. Faster, more secure, what’s not to love?

And then there’s the cost of virtualization licenses. According to a now-missing blog post on the Virtualization for Services Providers blog (thank you Google) written by a current employee of the VCE Company, the service provider (VSPP) cost for VMware Standard is $5/GB per month. On a 4GB VM, that’s $240 per year – or 150% the cost of the SeaMicro node over three years! (VMware Premier is $15/GB, but in fairness you do get a lot of incremental functionality in that version). And for all that you get a decrease in performance having the hypervisor between you and the bare metal server.
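For what it’s worth, the arithmetic checks out. Using the $5/GB-month VSPP figure and an assumed ~$500 per SeaMicro node (both from above, neither of which should be read as current pricing):

```python
# Rough check of the licensing math above. The $5/GB-month VSPP figure and
# the ~$500-per-node SeaMicro estimate both come from this article; treat
# them as illustrative, not current pricing.

vm_ram_gb = 4
vmware_std_per_gb_month = 5.00
node_cost = 500                        # approximate SeaMicro cost per server

annual_license = vm_ram_gb * vmware_std_per_gb_month * 12   # $240/year
three_year_license = annual_license * 3                     # $720

print(f"annual hypervisor license for a 4 GB VM: ${annual_license:.0f}")
print(f"3-year license vs. node cost: {three_year_license / node_cost:.0%}")
# -> roughly 140-150% of the hardware cost over three years
```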

Undoubtedly, Citrix (XenServer), RedHat (KVM), Microsoft (Hyper-V) and VMware will find ways to add value to the SeaMicro equation, but I suspect that many new approaches may emerge that make public clouds without the need for hypervisors a reality. As Feldman put it, SeaMicro represents a potential shift away from virtualization towards the old model of “physicalization” of infrastructure.

The SeaMicro approach represents the first truly new approach to data center architectures since the introduction of blades over a decade ago. You could argue – and I believe you’d be right – that low-power super-dense server clusters are a far more significant and disruptive innovation than blades ever were.

Because of the enormous decrease in TCO represented by this model, as much as 80% or more overall, it’s fairly safe to say that any prior predictions of future aggregate data center compute capacity are probably too low by a very wide margin. Perhaps even by an order of magnitude or more, depending on the price-elasticity of demand in this market.

Whew! This is some seriously good sh%t.

It’s the dawn of a new era in the data center, where the ants will reign supreme and will carry on their backs an unimaginably larger cloud than we had ever anticipated. Combined with hyper-efficient cloud operating models, information technology is about to experience a capacity and value-enablement explosion of Cambrian proportions.

What should you do? Embrace the ants as soon as possible, or face the inevitable Darwinian outcome.

The ants go marching one by one, hurrah, hurrah…

——————

(c) 2011 CloudBzz / TechBzz Media, LLC.  All rights reserved.  This post originally appeared at http://www.cloudbzz.com/seamicro-atom-and-the-ants/. You can follow CloudBzz on Twitter @CloudBzz.


BlueLock Takes an IT-Centric Cloud Approach to Hybrid Cloud

A couple months back I had a chance to catch up with Pat O’Day, CTO at BlueLock. They are a cloud provider headquartered in Indianapolis with two data centers (a primary and a backup), and also cloud capabilities on Wall Street and in Hong Kong for specific customers.

BlueLock has been a vCloud service provider for the past year and has taken an enterprise IT-centric approach to their cloud services. They are not going after the SMB web hosting market, and don’t want to sell to everybody. Their primary focus is on mid-tier enterprises looking for a provider that will deliver cloud in a way that integrates with customer environments – what you might expect from a managed services provider.

Initially they just provided private clouds, really just dedicated VMware environments with a vCenter front end. Their clouds now are still mostly private, with the user able to control what level of multi-tenancy they want. They do this through three models:

- Pay as you go multitenant
- Reserved multitenant at a lower cost
- Committed single-tenant dedicated infrastructure

For multi-tenant users they implemented vCloud Director as the UI. When showing this to their customers, they got feedback that Director was too unfamiliar when compared to vCenter. This gave them the idea to create a plug-in to vCenter that would allow VMware administrators to control their cloud resources.

Their plug-in was enabled by the fact that vCloud Director provides a full implementation of the vCloud API. This model has proven to be very popular with their customers. It was also very innovative.

In addition to starting and stopping cloud instances, users can move applications to BlueLock’s cloud and back again. As O’Day explained it, a vCenter administrator can create vApps from workloads running in their data center and use vCenter to deploy them to the cloud – and to repatriate them again if necessary.

Contrast this with most cloud providers. Some, like Amazon and Rackspace, require you to package up your applications and move them to the cloud with a lot of manual processing. Amazon now can import VMDKs, but that only gets you instances – not whole apps. Other service providers, including most who target the enterprise, have “workload onboarding” processes that generally require IT to package up their VMware images and let the provider manage the import. Sometimes this is free, sometimes there may be an onboarding charge. BlueLock’s approach makes it easy and under the control of IT for workloads and data to be migrated in both directions.

VMware recently announced vCloud Connector to perform essentially the same function. But to my knowledge BlueLock remains one of the few – if not the only – production clouds with this type of capability deployed.

While we all love to cite Amazon’s velocity of innovation, BlueLock has shown that even smaller providers can deliver very innovative solutions based on listening closely to customer requirements. While most people out there today are just talking about hybrid clouds, BlueLock is delivering.

HP Cloud Strategy? Not So Much…

At Interop this week I met with Doug Oathout, VP of Converged Infrastructure at HP.  It’s often been very frustrating trying to figure out if HP really has a cloud strategy and is poised to compete in this market.  While nobody would claim that HP is delivering any clarity on cloud right now, it sounds like they might be moving down the path a bit and a more comprehensive strategy might someday emerge.

What Doug talked about first was the economic value of a converged infrastructure (naturally).  In this regard they are positioning against Cisco and the broader VCE Coalition, with particular emphasis on openness vs. the more prescriptive VCE approach (any hypervisor vs. VMware only, automation tooling that crosses into legacy environments, etc.).  Cisco might say that the downside of supporting that level of openness is complexity and increased cost.  We’ll let them duke that out, but it’s clear that a market that used to be fragmented (storage, servers, networking, etc. sold by different vendors and integrated at the customer) has tilted towards more integrated and verticalized infrastructures that result in far fewer components and much less work to deploy.  I had to wonder if there was an opportunity for someone to do the same thing with commodity gear targeting the mass-market service provider space.

As for cloud offerings, there seem to be only three at the moment (at least that I was able to learn about in this meeting).

The first is private clouds built from their Matrix converged infrastructure and Cloud Service Automation (CSA) tools bundle (an integrated set of Opsware and other tools).  I guess I’d characterize this as IBM’s CloudBurst circa 2009 and Unisys’ Secure Private Cloud, but with a weaker story on cloudy capabilities such as support for multi-tenancy, scaling out and more.  It’s the “cloud-in-a-box” approach.

Their second cloud offering is a quick-start service (“CloudStart“) to roll out a simple “cloud in a box” solution on customer premise in 30 days. Obviously that’s kind of a bunch of hype because the process changes, integrations etc. you need to do to really drive value out of an enterprise cloud program take many months of deep effort.

Their third area is not really a defined offering.  They are doing services around some other cloud technologies, most notably Eucalyptus.  This is natural given the deficiencies in cloud functionality with their CSA-based approach.

Notably absent are any offerings out of their former EDS managed services unit.  Doug mentioned a Matrix Online offering for standing up short-term infrastructure blocks for testing purposes, but it’s not a cloud, isn’t even multi-tenant, and requires HP labor to do the provisioning.  Like I said, not a cloud (if it even exists – I can’t find it on the HP site).

Meanwhile, it seems like IBM is not putting as much emphasis on the CloudBurst approach anymore, instead focusing on their Smart Business Development & Test public cloud offering.  Sources tell me that this offering is doing quite well and several months ago there were tweets about them having run out of capacity.  HP currently has no such offering.

The takeaway for me was that HP is making inching progress in a couple areas of their business, but no discernible progress on delivering a comprehensive, aligned and compelling enterprise cloud story to the market.  Looks like we’ll be waiting a bit longer…


Savvis Offers Peek at Enterprise Cloud Future

I was first briefed on the Savvis Symphony VPDC (virtual private data center) back at Cloud Expo NYC in April of this year and had intended to post about it back then, or at least when they went live in July… so much for good intentions… They are starting to market this more heavily now, so perhaps it’s not a bad time to get this done because VPDC has a few innovations that are worth noting.

Perhaps the most interesting aspect of VPDC is their tiered QoS model.  Think about it.  Today clouds come in a one-size-fits-all model.  You either have the open Amazon model with minimal SLAs and fairly opaque underlying infrastructure based on commodity gear, or you get the “enterprise cloud” model with higher SLAs (and costs) based on enterprise-grade gear. Or something in the middle.  But you can’t typically get two or three SLA/QoS configurations, with commensurate pricing, from the same cloud provider.

With VPDC, that’s exactly what you get.


VPDC never goes to the Amazon level – with very low cost instances and no SLAs – but they do offer two QoS tiers today, with a third tier planned.

VPDC Essential is their starter level, with 99.9% SLA, best-effort QoS and inexpensive SATA-based storage. This is targeted at the dev/test use case.

VPDC Balanced is the mid-tier offering, with 99.99% SLA, VLANs, enterprise QoS on a 100 Mbps network, and 2-tier ILM storage.  They are targeting Balanced at the Web application use case.

VPDC Premier (planned) will have 99.995% SLAs, more VLAN provisioning, 1 Gbps network, and 3-tier storage for more “mission-critical” workloads.

As you move up, you get more prioritization of bandwidth, less storage contention, fewer VMDKs per LUN, faster drives, etc.

Savvis would not give me any pricing information, but clearly you will pay more for Premier, and likely even the Essentials pricing will be significantly more than Amazon or Rackspace. Lack of pricing transparency puts them a bit at odds with AT&T (Synaptic pricing here) and Terremark (vCloud pricing here).  The only information I have is that pricing is hourly based on CPU, RAM and which operating system you are using (Microsoft’s SPLA fees presumably causing the difference).  Interestingly, they are disclosing bandwidth fees and are charging for bandwidth like a hosting provider ($50/Mbps, 95th percentile model) vs. the more typical straight per-GB in/out metered model.
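For readers unfamiliar with 95th-percentile billing, here is a minimal sketch of how that hosting-style model differs from per-GB metering. The traffic samples and sample counts are purely hypothetical; only the $50/Mbps rate comes from Savvis:

```python
# Minimal sketch of 95th-percentile bandwidth billing (the hosting-style
# model Savvis is using) vs. straight per-GB metering. The traffic samples
# are hypothetical; the $50/Mbps rate is from the article.

def percentile_95(samples_mbps):
    """Drop the top 5% of 5-minute samples, bill on the highest remaining."""
    ordered = sorted(samples_mbps)
    cutoff = int(len(ordered) * 0.95) - 1
    return ordered[max(cutoff, 0)]

# Pretend month of 5-minute utilization samples (Mbps) - purely illustrative.
samples = [20] * 8000 + [80] * 500 + [300] * 140   # ~8,640 samples in a month

billable_mbps = percentile_95(samples)
cost_95th = billable_mbps * 50                     # $50 per Mbps, per article

print(f"95th percentile: {billable_mbps} Mbps -> ${cost_95th:,.0f}/month")
# The short 300 Mbps burst (top ~1.6% of samples) never shows up in the bill,
# whereas per-GB metering would charge for every byte transferred.
```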

Savvis has no current intention of allowing credit card self-signup models for new users, even with their Essentials package.  This could be a mistake as so many projects start off with a very small buy and the Amex charge is easy to expense.  AT&T and Terremark might get those customers that don’t want to start with a sales rep, though the buyer seriousness is certainly better formed if they are willing to go through that pain.  By and large, making it easier to sign up could be in Savvis’ best interests.

What’s VPDC Made Of?

VPDC is an enterprise cloud based on VMware virtualization, Cisco UCS blades, Cisco Nexus switching, HP Opsware provisioning automation, Compellent SAN storage, and other technologies – okay, good enterprise-grade stuff.  Savvis relies heavily on deep integration with UCS and Nexus to get the QoS tiering to work with VMware.  They also rely heavily on the flexibility of Compellent’s “Fluid Data Storage” virtualized storage software.  All images are monitored using TripWire and connectivity can include MPLS and VPNs.

What’s VPDC Mean for the Cloud Market?

Amazon, Rackspace and others have grown largely on the backs of Web developers, SMBs, and enterprise usage outside the control of corporate IT.  This is but a fraction of the potential future market as the enterprise moves more and more to the cloud.  Enterprise IT buyers are much more precise and demanding when it comes to infrastructure than most Web and game developers.  Having tiered SLA/QoS levels, with pricing to match, might become an important consideration for even the mass-market cloud providers if they want to win in the enterprise.  The market is just too big to ignore.

Alternatively, you could see the mass-market guys go the opposite route – adding the “five nines” SLAs and high QoS capability to their core commodity-priced offerings.  This is a technology and scale issue that Amazon and Google are probably in a good position to leverage.   After all, if you can get VPDC Balanced for the price of EC2 reserved instances, it will be pretty hard for Savvis to compete.  But that’s a big if.

In any case, Savvis has done a nice job leveraging the technology now available to create a differentiated offering based on QoS and SLAs.  VPDC is available through data centers in the U.S. and U.K.


VMware Should Run a Cloud or Stop Charging for the Hypervisor (or both)

I had a number of conversations this past week at CloudConnect in Santa Clara regarding the relative offerings of Microsoft and VMware in the cloud market.  Microsoft is going the vertically integrated route by offering their own Windows Azure cloud with a variety of interesting and innovative features.  VMware, in contrast, is focused on building out their vCloud network of service providers that would use VMware virtualization in their clouds. VMware wants to get by with a little help from their friends.

The problem is that few service providers are really VMware’s friend in the long run.  Sure, some enterprise-oriented providers will provide VMware capabilities to their customers, but it is highly likely that they will quickly offer support for other hypervisors (Xen, Hyper-V, KVM).  The primary reason for this is cost.  VMware charges too much for the hypervisor, making it hard to be price-competitive vs. non-VMware clouds.  You might expect to see service providers move to a tiered pricing model where the incremental cost for VMware might be passed onto the end-customers, which will incentivize migration to the cheaper solutions.  If they want to continue this channel approach but stop enterprises from migrating their apps to Xen, perhaps VMware needs to give away the hypervisor – or at least drop the price to a level that it is easy to absorb and still maintain profitability ($1/month per VM – billed by the hour at $0.0014 per hour plus some modest annual support fee would be ideal).
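As a quick sanity check on that suggested price point, converting a $1 monthly charge into an hourly rate lands right at the $0.0014 figure:

```python
# Quick sanity check on the "$1/month per VM, billed hourly" suggestion:
# a $1 monthly price works out to roughly the $0.0014/hour mentioned above.

monthly_price = 1.00
hours_per_month = 24 * 365 / 12        # ~730 hours in an average month

print(f"${monthly_price / hours_per_month:.4f} per VM-hour")   # ~$0.0014
```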

Think about it… If every enterprise-oriented cloud provider lost their incentive to go to Xen, VMware would win.  Being the default hypervisor for all of these clouds would provide even more incentive for enterprise customers to continue to adopt VMware for internal deployments  (which is where VMware makes all of their money).  Further, if they offered something truly differentiated (no, not vMotion or DRS), then they could charge a premium.

If VMware does not make this change, I believe that they can kiss their position in the cloud goodbye in the next 2 years or so.  Their alternative at that point is to offer their own cloud service to capture the value from their enterprise relationships and dominant position.  They can copy the vertically integrated strategy of Microsoft to make push-button deployment to their cloud service from both Spring and vCenter.  This has some nice advantages to them culturally as well.  VMware has a reasonably large enterprise sales force (especially when combined with EMC’s…), and these high-paid guns are unlikely to get any compensation when a customer migrates to Terremark.  There’s a separate provider sales force that does get paid.  If VMware created their own managed service and compensated their direct reps to sell it, adoption would soar.  With their position in the developer community via the Spring acquisition, they’ll get some easy low-hanging fruit as well. 

Now, put these concepts together – free hypervisor and managed offering.  Would they lose their services providers?  I doubt it.  Enterprises want choices while continuing to use what they already know.  Terremark, Savvis, and others will have good marketing success with VMware as long as it doesn’t break their financial model.  Further, VMware’s “rising tide” would actually float all of the other VMware-based service providers and help them to better position against and compete with the Xen-based mass-market clouds.  A “VMware Inside” campaign that actually promoted other service providers would also help. 

Being in the managed services space is a very different business for VMware.  The margins are lower, but they could build a very large and profitable cloud offering with their position in the enterprise.  Similarly, a unified communications service based on Zimbra would give them even more value to sell (and to offer through vCloud partners).  As long as they remove the financial incentive for providers to switch to Xen at the same time, they could have a very strong play in this space.

If VMware does not at least make the pricing change for service providers, their future in the cloud is very much at risk. 

p.s. While they’re at it, VMware needs to allow us to integrate directly with ESX and get rid of vCenter in a service provider environment.


Skytap Goes Deep in Networks

Skytap is known as a cloud dev/test provider today, but they have been seeing more workloads coming on board, including ERP migration, training, demos, etc.  So perhaps they are not as targeted as we think.  This can be a risk, where customers start to wonder what you stand for.  Skytap entered the dev/test market without a grand plan to expand beyond it – the expansion is being driven by customers.

Today they are announcing “multi-network” capabilities that enable multi-level network topologies to be as flexible as your on-premises networks.  Only it’s a lot easier – you can deploy this in a browser.  It even allows you to save configurations and check them in and out of the Skytap repository.  This is their “virtual private cloud” capability.  It goes significantly beyond the current Amazon VPC solution with much more flexibility and configurability.  It’s a lot less work to set this up than with Amazon VPC, which basically assumes that a developer is part of the network team.


Skytap is basically claiming the ability to enable the kinds of networks shown above.  This is a nice differentiator and makes it easier for enterprises to move multiple complex workloads, like SAP and other multi-tier applications, to a cloud.

Fair Weather Forecasted for Regional Clouds

I think we might be at the very beginning of an interesting new phase in the evolution of cloud computing — regional and local clouds.  Local and regional hosting is nothing new – there have been smaller players operating in the shadows of the big hosting companies for years.  Some of these organizations are resellers of larger data center capacity, while others have their own facilities.  It’s only natural that some of these local hosters may start new cloud initiatives to keep their customers from ending up at Amazon, Rackspace or Unisys.  Some will be successful, while others will fail, but no matter how it turns out – there will be a robust local cloud economy.

I spent some time with Jason Baker and Johnny Hatch from ReliaCloud last week while I was out in Minnesota.  ReliaCloud is an offshoot of Visi, a St. Paul-based hosting and colocation provider.  Recognizing that the traditional managed hosting and colocation market is being eclipsed by cloud computing, the Visi team decided to join the cloud party.  They decided to offer their cloud under a new brand, ReliaCloud, as a way to be more interesting to the market.  Here’s what else I learned:

  • The value proposition for a local cloud is strong.  Customers who want to keep tabs on where their applications are running, need to be able to audit their cloud service provider, need custom configurations, or just want the extra comfort of knowing the people who are running the cloud, will find a local cloud to be attractive. In some cases, regulatory issues favor a local cloud as well.  In healthcare or financial services, the fact that you know for sure where your data is at all times is comforting. 
  • Building a local cloud is not a huge challenge given the tools available today.  ReliaCloud uses VMOps, a venture-backed cloud stack provider out of Cupertino, CA.  VMOps has a pretty strong solution that allowed ReliaCloud to get up and running quickly with a minimum of hassle.  Jason and Johnny both said that VMOps was very responsive and delivered on all of their commitments.  Other tools that ReliaCloud evaluated included VMware, Xen Cloud from Citrix, Eucalyptus and others.  They just felt that VMOps was stronger and offered more of what ReliaCloud was looking for.  They really like how VMOps deals with networking and storage, and how robust their HA VMs are (based on Xen in this release, but with plans for other hypervisors in v2).  ReliaCloud built their own front end for self-service on top of VMOps APIs, though they commented that using what VMOps had out of the box would have saved them a lot of time.  They use the Tucows Platypus billing system, monitor their cloud with Nagios, and their storage is based on OpenSolaris ZFS managing Dell storage shelves (VMOps is adding more enterprise storage options too).
  • A local cloud can have quick success, especially if you already have customers you can turn to.  ReliaCloud has been in beta for a little over 2 months.  During their beta period, ReliaCloud is free and nearly 100 customers have signed up.  In some cases, customers are running live production systems already and getting a lot of the benefits of cloud computing early on.  ReliaCloud is not sitting still either.  They just hosted a joint seminar with enStratus (Minneapolis-based cloud tool provider), are offering free cloud computing to nonprofits (doing good for good PR), and are exploring providing private label cloud services to integrators and other hosters who don’t want to do it themselves.

ReliaCloud is still early in their journey to the cloud, but they are already having great success (I happen to know that in December one of the major vCloud-based telco clouds in the U.S. had fewer than 10 customers using their service).  An analogy I heard recently comes to mind…  Think of the big hosting companies as giant boulders.  They take up a lot of space, but they leave a lot of space between them for rocks, stones, pebbles and sand.  Local and regional clouds are there to fill the empty space, and there are a lot of them.
