
Rethinking a stale IT governance model for the cloud era

The rise of cloud computing is a perfect opportunity to rethink your IT governance model in a fundamental way.

Read the first part of this column for a discussion of how to reshape IT governance to add value, rather than hold a business back.

Many enterprises place prohibitions on the use of Amazon Web Services (AWS), Google and other cloud services, despite overwhelming evidence that these platforms enable far more innovation than most internal IT and are every bit as secure as current systems, often more so.

Rather than the five guiding principles of the COBIT framework that encourage these prohibitions, IT governance should have only one: Exceed stakeholder expectations for agility, innovation, quality and efficiency to drive business value creation.

How is that done while ensuring the proper allocation of capital, the right level of risk management, and adherence to the service-level agreements the business demands? By throwing away convention and being bold; by inspiring your team to let go of preconceptions and fears so they will follow you; and by removing people from your team who aren't capable of adapting to this new role of IT.

Read the rest of this article here…


Modernizing the IT governance framework for fun and profit

An overhaul of your IT governance framework can be fun and profitable. This may sound a little far-fetched, but bear with me.

IT leaders, including chief information officers (CIOs), must ensure proper governance over all aspects of the IT estate. IT governance focuses on five core principles, according to ISACA's COBIT 5 framework:

  1. Meeting stakeholders’ needs
  2. Covering the enterprise end-to-end
  3. Applying a single integrated framework
  4. Enabling a holistic approach
  5. Separating IT governance from management

The problem with the IT governance model is that the first principle is often a victim of the requisite processes and policies for the other four. This focus on how to implement the governance framework to control and reduce risk has a problem — it makes IT unwieldy and unable to meet the needs of stakeholders.

The governance of IT investments is a mess, and IT governance is killing enterprise innovation, according to a Harvard Business Review report. That’s a bit heavy-handed, but the core premise of the article is correct: Far too many IT investments are tactical and not driven by value creation for the business.

Read the rest of this article here…


Reshaping IT organizations to fulfill a DevOps strategy

DevOps is an exciting and far-reaching shift in IT delivery. The promises are tempting: radically higher productivity, lower cost and more reliable systems.

So I guess it's finally time for your IT organization to get on the development-operations (DevOps) bandwagon, right? Everybody's doing it, so don't delay. The big question isn't whether, but how and where to start.

First, go out and hire some DevOps people. Wait, wait — DevOps isn’t a job.

Okay, so you should create a DevOps group or department, right? A great team of people you can train up on DevOps led by a director of DevOps. Hold on! DevOps isn’t a department or function either.

Well, if it isn’t a role, a function or a department, what is DevOps?

Read the rest of this article here…


How to revamp your IT operational plan without getting fired

Hey there, aren't you the one who keeps telling everybody that the IT operations team is a service provider to the business? That you're customer-centric? That you deliver high-quality technological capabilities for a fair price? Aren't you the one who said that you know what it takes to satisfy the needs of developers and application owners, and that your IT operational plan is foolproof?

If that's the case, then why are your "clients" taking as much business as they can to public clouds and other service providers? If your service is so awesome, why do the budget owners keep telling you that you're too slow, too expensive, and lacking capabilities they can get elsewhere?

Read the rest of this article here…

The {Private} Cloud of Despair

“Oh ye of little faith!”  That’s kind of the reaction I have been getting from some of you to my last missive on the End of Private Clouds.

Perhaps it’s the definition of “not too distant future” that has people confused. There will absolutely be a continuing investment by vendors and their enterprise customers in building and deploying private cloud solutions. Some value will be realized and we can help make sure that happens. However, at some point most customers will fall into a pit of despair and abandon their private clouds BECAUSE THEY CAN’T KEEP UP.

How many enterprises write their own DBMS? SFA/CRM? Operating Systems, Networking, HRMS, ERP, or other systems? How many enterprises custom design their own servers and storage devices?  You know at some point early in the market many of them did just that. Then a vendor solution came along and made continuing their investment a bad business decision.

The fact is that Amazon, Google, Microsoft and (perhaps) IBM are proving that they can run data centers and clouds far more effectively than most enterprises. And it's only going to get worse for the in-house IT team. The sheer scale, cost model, and ability to invest in R&D for data center technology and operations just favor the bigger players, and the gaps are only going to grow.

There's been some recent noise about the gap between OpenStack hype and actual enterprise adoption. Perhaps many IT organizations are struggling with the tension between their desire to build and operate something as cool as a cloud and their understanding of the bleak reality they are facing. The ladder to cloud success is very tall, and only the undaunted will attempt the climb. The rest will despair of the quixotic nature of such a quest and move on to greener pastures.


The End of Private Cloud – 5 Stages of Loss and Grief

It's not today, or tomorrow, but sometime in the not-too-distant future the bulk of the on-premises private cloud market is going to shrivel into a little raisin and die. A very small number of very large companies will operate private clouds that will be, by and large, poor substitutes for the services available in public clouds. However, they will be good enough for these companies for some percentage of their workloads.

I have seen dozens of private cloud efforts at many large customers. Most are pretty weak shells of a cloud, not coming close to the economics or capabilities of even second- or third-tier public clouds. Comparing them to AWS, Azure or Google is like comparing my artwork to a Picasso or Rembrandt. The only similarity is that I can still call mine art even if it's atrocious. I can still call your cloud a cloud too – even if it's expensive, inelastic, and lacking anything but the most basic of features. Some will be reasonable, but in the long run it's a game you cannot win.

No matter how good you think you are, you'll never have the resources, skills or need to be as good as Amazon. AWS deploys enough computing capacity every day to run Amazon.com as it was when it was a $7B online retailer. How many servers will you rack and stack today? How many petabytes of storage will you deploy this weekend? How many features did you update this year? (Hint: in just the first half of November, Amazon announced 27 enhancements, features or entirely new services!)

In her seminal work, “On Death and Dying,” Elisabeth Kübler-Ross articulated the 5 Stages of Loss and Grief. I think it’s time to look at this for private clouds.

1. Denial and Isolation

Most large company IT organizations are in severe denial about what is going on in the public cloud market today. They think that if only they get their vCloud or OpenStack cloud up and running they can be just like Amazon. Or perhaps they still cling to the total fantasy that their internal data centers are somehow more secure than Amazon's or Microsoft's – companies that spend more on InfoSec per day than most enterprises will invest over the next 5 years. The denial comes from a fear of change, fear of loss of position and career, or just ignorance. By the way, denial is a guarantee that the risk is real. Those who see the future have already shifted their careers to ride the wave instead of being destroyed by it.

2. Anger

Once you start to understand what is happening – that your career plans and worldview are being overtaken by the cloud – it is natural to become angry and bitter. You'll go out of your way to point out potential security or performance issues with public clouds, maybe blogging about the "what ifs" of outages and disruptions, attack vectors and dirty power grids. You can't fight this, because your CEO is not going to give you the $100M it will take to really build a private cloud. Or perhaps you're a private cloud vendor looking for that exit that may never come. "Oh, why did I waste my time on this market," you might cry when all of the exits have passed you by and you're looking at an ever-dwindling market with lots of dying startups trying to consume whatever oxygen is left. Perhaps you can jump on (off) the Cloud Liberation Freedom Front (aka "CLiFF")…

3. Bargaining

The normal reaction to feelings of helplessness and vulnerability is often a need to regain control:

  • If only I had moved to public cloud sooner…
  • If only I had gotten better advice from IBM, HP, VMware, Oracle or Accenture…
  • If only we had tried to be more cloudy in our data center…

Secretly, we may make a deal with our higher power in an attempt to postpone the inevitable. This is a weaker line of defense to protect us from the painful reality. Do we pray for AWS to fail? Do we pray for a Google data center meltdown?

4. Depression

It's over. Sadness and regret set in, and we realize that there is nothing to be done. Our best-laid plans are in ruins. The future looks bleak: servers getting older by the minute, turning off one by one in silent desolation. The staffing model for 2020 shows a drawdown to a skeleton crew just keeping alive the old legacy stuff that you can't kill or migrate. It's dull, sad drudgery.

5. Acceptance

Not everyone will get here. Many have already, coming to the early conclusion that the future is and will be in the public clouds. Those who get here before everybody else will have more opportunity, more reward, more fulfillment. The late arrivals may have to find other careers – like today's laid-off mainframe programmer looking for a job at Facebook, it ain't gonna happen, dude. Many a former techie has found fulfillment and happiness in other fields – I even know one who went back to medical school and is a practicing oncologist. Pretty cool, eh? Even Julia Child didn't start cooking until she was 50, so your second career is nothing to fear!

In any event, once you understand that the public cloud is the future – and when you are over the denial, anger, bargaining and depression – you can start to make plans.

CIOs should start getting ahead of the curve, thinking very hard about whether that new data center plan is worth the investment. Why spend $30 million, or $300 million, on a fancy new data center that may lie empty in a decade? Instead of investing $5 million in a new private cloud, how about investing $5 million in the InfoSec upgrades required to safely use a public cloud?

It’s only a matter of time. Resistance is indeed futile. The public cloud is the future.


It’s All About SDN

By Ben Grubin

HP's announcement last week at Interop that it is shipping its SDN SDK and SDN App Store is merely one of the first salvos in a war that will likely heat up over the next 24 months. Once the purview of marketing and start-ups, Software-Defined Networking has now become the dominant strategy of HP, VMware and others seeking to truly disrupt the current state of data center network architecture.

[Image: HP SDN ecosystem]

As VMware announced a few months ago, it is going further with its NSX-based SDDC (Software-Defined Data Center) concept, which essentially treats the entire underlying network infrastructure as dumb pipes.

In this new world of SDN and SDDC, the never-ending list of features that Cisco, HP, Dell, Huawei, and others have used as the lynchpin of their competitive strategy in the Ethernet switching and routing markets is nearly irrelevant. Instead, what these new technologies demand is simplicity and speed, something incompatible with layering on hundreds of unnecessary features into the software that drives Ethernet switches.

In fact, layering is the underlying story here. While most network architects have tried to avoid network overlays because of their complexity and the loss of visibility into the layer 2 and 3 architecture, SDN and SDDC are truly a network overlay, one that abstracts away the entirety of the underlying physical network.

Implementing SDDC imposes only two basic requirements on the underlying network: it should have as few hops as possible between any two points, and as much "symmetric" capacity as possible, meaning the capacity should be equally large between any two points on the network. Only with this design do you enable the broadest possible freedom at the overlay SDDC layer.

What don't you need? VLANs, layer 3 routing protocols (OSPF, IGRP), and other such mainstays of the data center. All of this is handled inside the software layer and, with VMware NSX, the virtual infrastructure.

All in all, this is an exciting movement towards simplifying the network layer and making it more agile and responsive to the needs of business. Having per-VM virtualized network components such as load balancers, firewalls, and switches means less specialized equipment and less capital outlay in the racks.

Is all of this going to be in production tomorrow? No way. There are still some key hardware and software challenges that need to be solved to equalize the performance equation. However, if history is our guide, it won't take long for those to be conquered.


Oracle of the Cloud – Seek and Ye Shall Find

Oracle & Cloud. Oil & Water. Never the twain shall mix. Or so it's been until now.

Excluding SaaS offerings that were mostly acquired, Oracle has been largely absent from the cloud these past 7 years. However, one thing you can always count on from Larry & Co. is an uncanny ability to adapt, embrace and compete like hell when it matters. Coming back from an 8-1 deficit to win 8 straight America's Cup races shows you just how much Ellison likes to win.

After years of ignoring or aggressively denying the importance of cloud computing, Oracle has finally demonstrated credible progress, with no fewer than 10 new offerings announced at Oracle OpenWorld this week. There is still a fair amount of cloudwashing going on, but for the first time it is no longer fair to deride Oracle as cloud hype without substance. It was fun while it lasted, though.

Oracle is embracing the public cloud with database, middleware, compute and storage offerings. Their compute solution, powered by the acquisition of Nimbula and Chris Pinkham, looks pretty reasonable at first glance. And storage built on OpenStack Swift is also pretty leading edge. Multiple DBaaS offerings and a cloud-extended database backup appliance will probably be well-received by Oracle’s customer base.

In the private cloud, Oracle is starting to make some progress as well. I wouldn't use them to build private IaaS clouds at this point, but they are selling an IaaS-in-a-box "engineered system" that might get some users. What's more interesting is their database consolidation play, which is being offered to major enterprises through an Exadata DBaaS offering that can run in customer data centers. A very solid customer case from UBS shows that this is real.

Another interesting area is the middle tier, with the availability of Dynamic Clusters in WebLogic 12c. As in a good PaaS environment (which this is not), the ability to perform horizontal scaling of workloads seamlessly (and within preset constraints) is pretty interesting. Application changes might be required, and I don't believe that multi-geo scaling would work with their model without significant code changes, but it's a good start at enterprise PaaS functionality.

I came to Oracle OpenWorld seeking truth and wisdom on the cloud but expecting very little. To Oracle's credit, they have exceeded my expectations. If you are an Oracle client or partner, it's time to take a look at their cloud story to see how it might fit with your plans. I'd still be wary of some of their claims, and I don't believe they will be able to meet all of your needs, but at least they are in the game and competing. And we all know what happens when Ellison chooses to compete.

Getting Ready for the Cloud

by Ben Grubin

Whether you have a handful of applications or thousands of them, if some are not already running in the cloud, the idea has likely been discussed. Most people agree there are large numbers of applications that should be relatively easy to migrate to cloud infrastructure, yet most still haven't made the jump to cloud. Why?

A few years ago, I remember writing about the immaturity of public cloud services. My thinking then was that building a private cloud and migrating your applications to it internally would build the institutional knowledge (capabilities, policies, experience, etc.) necessary for migrating and operating applications in a public cloud, while radically simplifying storage and network issues. These days most companies still haven't made it that far, even though the maturity of the public cloud has grown by leaps and bounds. In fact, public cloud maturity has come so far that the question is no longer whether to migrate applications to the public cloud, but how many, and to which cloud?

In hindsight, it’s pretty easy to see that leaping into cloud (private OR public) a few years ago was a pretty risky and expensive proposition. Most enterprises made the right choice when they elected to sit tight, leverage virtualization to reduce wasted hardware and consolidate data centers (or at least reduce the growth of hardware), and keep a weather eye on this “cloudy” stuff. But now, with a maturing IaaS cloud market, is it time to jump in?

Sorta.

While public clouds are maturing, the question of which public cloud can be tricky. Yes, Amazon AWS currently has the lion’s share of the market, but the lower left corner of the Gartner Magic Quadrant for IaaS is very crowded, with new entrants daily. Furthermore, some IT behemoths are just piling into this market: see Tuesday’s announcement that Oracle is launching the Oracle Compute Cloud, intended as a competitive platform to AWS.

The answer may be to optimize your application for IaaS portability, rather than for a specific cloud environment. For example, decoupling services from the core application both helps an application become easier to scale horizontally, and frees you to change out underlying technologies in those services (like moving from sending your own email to using Amazon’s Simple Email Service).
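To make that decoupling concrete, here's a minimal sketch in Python. The class and function names are mine, not from any particular framework, and the SES call assumes the boto3 client: the application talks to a tiny send() interface, so swapping your own SMTP relay for Amazon's Simple Email Service is a configuration change, not a rewrite.

```python
# Minimal sketch: the app depends on an email-sending interface, not on
# any one provider. SMTP host, addresses, and names are placeholders.
import smtplib
from email.message import EmailMessage

import boto3


class SmtpSender:
    """Sends mail through your own SMTP relay."""
    def __init__(self, host):
        self.host = host

    def send(self, source, to, subject, body):
        msg = EmailMessage()
        msg["From"], msg["To"], msg["Subject"] = source, to, subject
        msg.set_content(body)
        with smtplib.SMTP(self.host) as smtp:
            smtp.send_message(msg)


class SesSender:
    """Sends mail through Amazon SES instead of your own relay."""
    def __init__(self):
        self.ses = boto3.client("ses")

    def send(self, source, to, subject, body):
        self.ses.send_email(
            Source=source,
            Destination={"ToAddresses": [to]},
            Message={
                "Subject": {"Data": subject},
                "Body": {"Text": {"Data": body}},
            },
        )


# The application never knows which backend it has.
def notify(mailer, user_email):
    mailer.send("noreply@example.com", user_email,
                "Deployment complete", "Your stack is up.")
```

Either backend satisfies the same interface, which is exactly the kind of seam that makes an application portable across IaaS environments.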

Making your applications ready for the cloud now positions you to take greater advantage of the growing diversity of the public cloud ecosystem. Tackling changes today will make it a lot easier to move your apps when the time is right.

Cloud Expo NY Interview

Thoughts on cloud, Cloud Technology Partners, and PaaSLane


theCUBE Interview at EMC World 2013

Here is a video of me live inside theCUBE with Wikibon's John Furrier and Stu Miniman on the floor of EMC World 2013 in Las Vegas, discussing the current state of cloud providers and where the industry is going.

Thank you to ServiceMesh for inviting me to speak.


Measuring the Business Value of Cloud Computing

My favorite and least favorite question I get is the same – “Can you help me build a business case and ROI for cloud computing?”

Well, yes… and no. The issue is that cloud computing has such a massive impact on how IT is delivered that many of the metrics and KPIs that are typically used at many enterprises don’t capture it.  I mean, how do you capture Agility – really?

In the past I have broken this down into 3 buckets. Yes, some people have more but these are the big three…

Agility

Agility means reducing cycle time from ideation to product (or system delivery). It is incredibly difficult to measure, because no two products or projects are alike, so apples-to-apples comparisons are rare. You can approximate it with Agile story points, tracking the average number of points delivered per fixed-length sprint over time. Most IT shops don't measure developer productivity in any way today, so it's hard to establish a baseline, let alone detect changes. I have done some work on quantifying developer downtime and productivity, but agility is almost something you have to take on faith. It's the real win for cloud computing, no matter how else you slice it.
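If you do want a number, a crude velocity baseline is easy to compute. A minimal sketch, with invented sprint data:

```python
# Crude agility proxy: average story points delivered per fixed-length
# sprint, before and after a change. All sprint data here is invented.

def average_velocity(points_per_sprint):
    return sum(points_per_sprint) / len(points_per_sprint)

before = [21, 18, 24, 19, 22]   # points per two-week sprint, pre-cloud
after = [27, 31, 26, 33, 30]    # same team, post-cloud (hypothetical)

baseline = average_velocity(before)
change = (average_velocity(after) - baseline) / baseline
print(f"velocity change: {change:+.0%}")   # about +41% on this data
```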

Efficiency

In a highly automated cloud environment with resource lifecycle management and open, self-service, on-demand provisioning, the impetus for long-term hoarding of resources is eliminated. Reclaiming resources and using only what you need today (because capacity is like water: cheap and readily available), coupled with moving dev/test tasks to public clouds when you hit capacity (see Agility above), can radically reduce the dev/test infrastructure footprint, by 50% or more. Further, the elimination of manual processes reduces labor as an input to the TCO of IT. In one smaller dev/test lab I know of, with only 600 VMs at any given time, 4 onshore FTE roles were converted to 2 offshore FTE resources.
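To illustrate the reclamation lever, here's a minimal sketch of a lease-based cleanup pass. The "lease_expires" tag, the inventory shape, and the deprovision hook are all hypothetical; a real lifecycle manager adds renewals, notifications and grace periods:

```python
# Minimal sketch: deprovision dev/test VMs whose lease has expired, so
# hoarding has no payoff. Each VM is assumed to carry a "lease_expires"
# tag holding an ISO-8601 timestamp with a UTC offset.
from datetime import datetime, timezone


def lease_expired(vm, now):
    # The expiry tag is stamped at provisioning time.
    return datetime.fromisoformat(vm["tags"]["lease_expires"]) < now


def reclaim(inventory, deprovision):
    now = datetime.now(timezone.utc)
    reclaimed = 0
    for vm in inventory:
        if lease_expired(vm, now):
            deprovision(vm["id"])  # capacity goes back to the pool
            reclaimed += 1
    return reclaimed
```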

There's a very deep book on this topic that came out recently from Joe Weinman, called Cloudonomics (www.cloudonomics.com). One of its key points is the ability to calculate the economics of a hybrid model, where your base-level requirements are met with fixed infrastructure and the variable demand above that base is met with an elastic model. A key quote (paraphrased): "A utility model costs less even though it costs more."

The book is based on this paper — http://joeweinman.com/Resources/Joe_Weinman_Inevitability_Of_Cloud.pdf

And can be summarized as…

[Figure: cost comparison of fixed, utility and hybrid capacity models. Source: Joe Weinman, "Cloudonomics"]

A hybrid model is the most cost-effective – which is “obvious” on the surface but now rigorously proven (?) by the math.

P = Peak.  T = Time.  U = the utility price premium.
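A toy model (my numbers, not Weinman's) makes the comparison easy to check: own capacity for the peak, rent everything at a premium U, or own a base and burst above it.

```python
# Toy cost comparison of fixed, utility, and hybrid capacity models.
# unit_cost is the per-period cost of an owned unit; the utility charges
# a premium U on usage. The demand figures are invented.

def fixed_cost(demand, unit_cost=1.0):
    return max(demand) * unit_cost * len(demand)       # own the peak

def utility_cost(demand, unit_cost=1.0, U=2.0):
    return U * unit_cost * sum(demand)                 # pay per use

def hybrid_cost(demand, base, unit_cost=1.0, U=2.0):
    owned = base * unit_cost * len(demand)
    burst = U * unit_cost * sum(max(d - base, 0) for d in demand)
    return owned + burst

demand = [2, 3, 2, 4, 10, 3, 2, 9, 2, 3]               # a peaky workload
print(fixed_cost(demand))                               # 100.0
print(utility_cost(demand))                             # 80.0
print(min(hybrid_cost(demand, b) for b in range(11)))  # 58.0, at base=3
```

On this peaky demand curve the hybrid beats both pure strategies, which is the book's point: the base load is cheapest to own, and the peaks are cheapest to rent.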

If you add the utility pricing model in Joe Weinman's work to some of the other levers I listed above, you get a set of interesting metrics. Most IT shops will focus on this alone to provide the ROI; they are the ones missing the key point on Agility. However, I do understand the project budgeting dance: if you can't show an ROI that the CFO will bless, you might not get the budget unless the CEO is a true believer.

Quality

What is the impact of removing human error (even though automation initially inserts systematic error until you work it through)? Many IT shops still provision security manually in their environments, and there are errors. How do you quantify the reputation risk of allowing an improperly secured resource to be used to steal PII data? It's millions, or worse. You can quantify the labor savings (see Efficiency above), but you can also show the reduction in IT operational risk through improved audit performance and easier regulatory compliance certification. Again, this all comes through automation.
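As one concrete example, a firewall rule provisioned from code comes out the same every time. A minimal sketch using boto3's EC2 security group calls; the group name, VPC ID and CIDR range are placeholders:

```python
# Minimal sketch: a security group provisioned from code instead of by
# hand, so it is identical on every run. Names and IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

group = ec2.create_security_group(
    GroupName="web-tier",
    Description="Web tier: HTTPS in, nothing else",
    VpcId="vpc-12345678",
)

# The same rule every time: HTTPS in, nothing else.
ec2.authorize_security_group_ingress(
    GroupId=group["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```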

IT needs to get on the bandwagon and understand the fundamental laws of nature here — for 50-80% of your work even in a regulated environment, a hybrid utility model is both acceptable (risk/regulation) and desirable (agility, economics, and quality).

Do a Study?

The only way to break all of this down financially is to do a Value Engineering study and use this to do the business case. You need to start with a process review from the outside (developer) in (IT) and the inside (IT) out (production systems). Show the elimination of all of the manual steps.  Show the reduced resource footprint and related capex by eliminating hoarding behavior. Show reduced risk and lower costs by fully automating the provisioning of security in your environment. Show the “cloudonomics” of a hybrid model to offset peak demand and cyclicality or to eliminate or defer the expense of a new data center (that last VM with a marginal cost of $100 million anybody?).

History Lesson

In 1987 the stock market crashed and many trading floors could not trade because they lacked real-time position keeping systems. Traders went out and bought Sun workstations, installed Sybase databases, and built their own.  They didn’t wait for IT to solve the problem – they did it themselves.  That’s what happens with all new technology innovation.

The same thing happened with Salesforce.com. Sales teams just started using it and IT came in afterwards to integrate and customize it. It was obviously a good solution because people were risking IT’s displeasure by using it anyway.

If you really want to know whether cloud computing has any business value, take a look at your corporate credit card expenses and find out who in your organization is already using public clouds – with or without your permission. It's time to stop calculating possible business value and start realizing actual business value from the cloud.


IaaS Cloud Litmus Test – The 5 Minute VM

I will make this simple.  There is only one question you need to ask yourself or your IT department to determine if what you have is really an Infrastructure-as-a-Service cloud.

Can I get a VM in 5-10 minutes?

Perhaps a little bit more detailed?

Can a properly credentialed user with a legitimate need for cloud resources log into your cloud portal or use your cloud API, request a set of cloud resources (compute, network, storage), and have them provisioned automatically in a matter of minutes (typically less than 10, and often less than 5)?

If you can answer yes, congratulations – it’s very likely a cloud.  If you cannot answer yes it is NOT cloud IaaS. There is no wriggle room here.

Cloud is an operating model supported by technology.  And that operating model has as its core defining characteristic the ability to request and receive resources in real-time, on-demand. All of the other NIST characteristics are great, but no amount of metering (measured service), resource pooling, elasticity, or broad network access (aka Internet) can overcome a 3-week (or worse) provisioning cycle for a set of VMs.
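What does "request and receive in real-time" look like? Something like this minimal sketch against the EC2 API via boto3, where the AMI ID, region and instance type are placeholders; the elapsed time printed at the end is the litmus test:

```python
# Minimal sketch: request a VM through a cloud API and time how long it
# takes to be running. AMI ID, region, and instance type are placeholders.
import time

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

start = time.time()
resp = ec2.run_instances(
    ImageId="ami-12345678",   # hypothetical image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]

# Block until the instance is running, then report elapsed time.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
print(f"{instance_id} running after {time.time() - start:.0f} seconds")
```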

Tie this to your business drivers for cloud.

  • Agility? Only if you get your VMs when you need them. Like NOW!
  • Cost? If you have lots of manual approvals and provisioning steps, you have not taken the cost of labor out. The 5 Minute VM requires 100% end-to-end automation, with no manual approvals.
  • Quality? Back to manual processes – these are error-prone because humans suck at repetitive tasks compared to machines.

Does that thing you call a cloud give you a 5 Minute VM?  If not, stop calling it a cloud and get serious about building the IT Factory of the Future.

“You keep using that word [cloud].  I do not think it means what you think it means.”

– The Princess Cloud




Open Call to VMware – Commercialize Cloud Foundry Software!

After spending time at VMware and Cloud Expo last week, I believe that VMware’s lack of full backing for Cloud Foundry software is holding back the entire PaaS market in the enterprise.

Don’t get me wrong, there’s a lot of momentum in PaaS despite how very immature the market is. But this momentum is in pockets and largely outside of the core of software development in the enterprise. CloudFoundry.com might be moving along, but most enterprises don’t want to run the bulk of their applications in a public cloud. Only through the Cloud Foundry software layer will enterprises really be able to invest. And invest they will.

PaaS-based applications running in the enterprise data center are going to replace (or envelop) traditional app server-based approaches. It is just a matter of time, given the productivity gains and support for cloud models. Cloud Foundry has the opportunity to be one of the winners, but it won't happen if VMware fails to put its weight behind it.

Some nice projects like Stackato from ActiveState are springing up around Cloud Foundry, but the enterprises I deal with every day (big banks, insurance companies, manufacturers) will be far more likely to commit to PaaS if a vendor like VMware gets fully behind the software layer. Providing an open source software support model is fine, and perhaps a good way to start. However, this is going to be a lot more interesting if VMware provides a fully commercialized offering with all of the R&D enhancements, etc.

This market is going to be huge – as big as or bigger than the traditional web app server space. It's just a matter of time. Cloud Foundry is dominating the current discussion about PaaS software but lacks the full support of VMware (commercial support, full productization), and that is holding people back from investing. VMware reps ought to be including Cloud Foundry in every ELA and every sales discussion, and they need some way to get paid a commission if that is to happen. That means they need something to sell.

VMware’s dev teams are still focused on making Cloud Foundry more robust and scalable. Stop! It’s far better to release something that’s “good enough” than to keep perfecting and scaling it.
“The perfect is the enemy of the good.” – Voltaire

It's time for VMware to get with the program and recognize what they have and how it can be a huge profit engine going forward – but they need to go all in starting now!



VMware’s OpenStack Hook-up

VMware has applied to join the OpenStack Foundation, potentially giving the burgeoning open source cloud stack movement a huge dose of credibility in the enterprise. There are risks to the community in VMware's involvement, of course, but on balance this could be a pivotal event. There is an alternative explanation, which I will get to at the end, but it's a pretty exciting development regardless of VMware's true motivations.

VMware has been the leading actor for cloud computing in the enterprise. Most “private clouds” today run vSphere, and many service providers have used their VMware capabilities to woo corporate IT managers. While the mass-market providers like Amazon and Rackspace are built on open source hypervisors (typically Xen though KVM is becoming more important), the enterprise cloud is still an ESXi hypervisor stronghold.

Soapbox Rant: Despite the fact that most of the enterprise identifies VMware as their private cloud supplier, a very large majority of claimed "private clouds" are really nothing more than virtualized infrastructure. Yes, we are still fighting the "virtualization does not equal cloud" fight in the second half of 2012. On the "Journey to the Cloud," most VMware private clouds are still in Phase I or early Phase II, nowhere near a fully elastic, end-to-end automated environment driven by a flexible service catalog.

VMware's vCloud program includes a lot of components, old and new, anchored by the vCloud Director ("vCD") cloud management environment. vCD is a fairly rich cloud management solution, with APIs and several interesting features and add-ons (such as vCloud Connector).

vCD today competes directly with OpenStack Compute (Nova) and related modules. However, it is not really all that widely used in the enterprise (I have yet to find a production vCD cloud, but I know they exist). Sure, there are plenty of vCD installations out there, but I'm pretty sure adoption has been nowhere near what VMware had hoped (cue the VMware fan boys).

From early days, OpenStack has supported the ESXi hypervisor (while giving Microsoft’s Hyper-V a cold shoulder). It’s a simple calculus – if OpenStack wants to operate in the enterprise, ESXi support is not optional.

With VMware's overtures to the OpenStack community – if that is what this is – it is possible that the future of vCloud Director could be closely tied to the future of OpenStack. OpenStack innovation seems to be rapidly outpacing vCD, which looks very much like a project suffering from bloated development processes and an apparent lack of innovation. At some point it may have become obvious to people well above the vCD team that OpenStack's momentum and widespread support could no longer be ignored from inside a protectionist bubble.

If so, VMware should be commended for their courage and openness to support external technology that competes with one of their strategic product investments from the past few years. VMware would be joining the following partial list of OpenStack backers with real solutions in (or coming to) the market:

  • Rackspace
  • Red Hat
  • Canonical
  • Dell
  • Cloudscaling
  • Piston
  • Nebula
  • StackOps

Ramifications

Assuming the future is a converged vCD OpenStack distro (huge assumption), and that VMware is really serious about backing the OpenStack movement, the guys at Rackspace deserve a huge round of applause. Let’s explore some of the potential downstream impacts of this scenario:

  • The future of non-OpenStack cloud stacks is even more in doubt. Vendors currently enjoying some commercial success but now under serious threat of "nichification" or irrelevancy include Citrix (CloudStack), Eucalyptus, BMC (Cloud Lifecycle Management), and… well, is there really anybody else? You're either an OpenStack distro, an OpenStack extension, or an appliance embedding OpenStack if you want to succeed. At least until some amazing new innovation comes along to kill it. OpenStack is to CloudStack as Linux is to SCO? Or perhaps FreeBSD?
    • Just weigh the non-OpenStack community against OpenStack's "who's who" list above. If you're a non-OpenStack vendor and you are not scared yet, you may already be dead and just not know it.
    • As with Linux v. Unix, there will be a couple of dominant offerings and a lot of niche plays supporting specific workload patterns.  And there will be niche offerings that are not OpenStack.  In the long run, however, the bulk of the market will go to OpenStack.
  • The automation vendors (BMC, IBM, CA, HP) will need to embrace and extend OpenStack to stay in the game. Mind you, there is a LOT of potential value to what you can do with these tools. Patch management and compliance is just scratching the surface (though you can use Chef for that too, of course). Lots of governance, compliance, integration, and related opportunities for big markets here, and potentially all more lucrative and open to differentiated value. I’ve been telling my friends at BMC this for the past couple of years – perhaps I’ve got to get a bit more vociferous…
  • The OpenStack startups are in a pretty tough position right now. The OpenStack ecosystem has become its own pretty frothy and shark-filled "red ocean," and the noise from the big guys – Rackspace, Red Hat, VMware, Dell, etc. – will be hard to overcome. I foresee a handful of winners, some successful pivots, and the inevitable failures (VCs invest in risk, right?). There are a lot of very smart people working at these startups, and at cloudTP we work with several of them, so I wouldn't count any of them out yet. But in the long run, if the history of open source is any indicator, the market can't support 10+ successful OpenStack software vendors.
  • Most importantly, it is my opinion that OpenStack WILL be the enterprise choice in the next 2-3 years. Vendors who could stop this – including VMware and Microsoft – are not getting it done (Microsoft is particularly missing the boat on the cloud stack layer). We’ll see the typical adoption curve with the most aggressive early adopters deploying OpenStack today and driving ecosystem innovation.
  • Finally, with the cloud stack battle all but a foregone conclusion, the battle for the PaaS layer is ripe for a blowout. And unlike the IaaS stack layer, the PaaS market will be a lot less commoditized in the near future.  There is so much opportunity for differentiation and innovation here that we will all have a lot to keep track of in the coming years.

Alternative Explanations

Perhaps I am wrong and the real motivation here for VMware is to tactically protect their interests in the OpenStack project – ESXi integration, new features tied to the vSphere roadmap, etc. The vCD team may also be looking to leverage the OpenStack innovation curve and liberal licensing model (Apache) to find and port new capabilities to the proprietary VMware stack – getting the benefit of community development efforts without having to invent them.

My gut tells me, however, that this move by VMware will lead to a long-term, strategic commitment that will accelerate the "OpenStack in the enterprise" market.

Either way, VMware’s involvement in OpenStack is sure to change the dynamic and market for cloud automation solutions.

