Open Call to VMware – Commercialize Cloud Foundry Software!

After spending time at VMware and Cloud Expo last week, I believe that VMware’s lack of full backing for Cloud Foundry software is holding back the entire PaaS market in the enterprise.

Don’t get me wrong, there’s a lot of momentum in PaaS despite how very immature the market is. But this momentum is in pockets and largely outside of the core of software development in the enterprise. CloudFoundry.com might be moving along, but most enterprises don’t want to run the bulk of their applications in a public cloud. Only through the Cloud Foundry software layer will enterprises really be able to invest. And invest they will.

PaaS-based applications running in the enterprise data center are going to replace (or envelop) traditional app server-based approaches. It is just a matter of time, given the productivity gains and native support for cloud models. Cloud Foundry has the opportunity to be one of the winners, but it won’t happen if VMware fails to put their weight behind it.

Some nice projects like Stackato from ActiveState are springing up around Cloud Foundry, but the enterprises I deal with every day (big banks, insurance companies, manufacturers) will be far more likely to commit to PaaS if a vendor like VMware gets fully behind the software layer. Providing an open source software support model is fine and perhaps a good way to start. However, this is going to be a lot more interesting if VMware provides a fully commercialized offering with all of the R&D enhancements, etc.

This market is going to be huge – as big or bigger than the traditional web app server space. It’s just a matter of time. Cloud Foundry is dominating the current discussion about PaaS software but lacks the full support of VMware (commercial support, full productization). This is just holding people back from investing.  VMware reps ought to be including Cloud Foundry in every ELA, every sales discussion, etc. and they need to have some way to get paid a commission if that is to happen. That means they need something to sell.

VMware’s dev teams are still focused on making Cloud Foundry more robust and scalable. Stop! It’s far better to release something that’s “good enough” than to keep perfecting and scaling it.
“The perfect is the enemy of the good.” – Voltaire

It’s time for VMware to get with the program and recognize what they have and how it can be a huge profit engine going forward – but they need to go all in starting now!

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.


VMware’s OpenStack Hook-up

VMware has applied to join the OpenStack Foundation, potentially giving the burgeoning open source cloud stack movement a huge dose of credibility in the enterprise. There are risks to the community in VMware’s involvement, of course, but on the balance this could be a pivotal event. There is an alternative explanation, which I will hit at the end, but it’s a pretty exciting development no matter VMware’s true motivations.

VMware has been the leading actor for cloud computing in the enterprise. Most “private clouds” today run vSphere, and many service providers have used their VMware capabilities to woo corporate IT managers. While the mass-market providers like Amazon and Rackspace are built on open source hypervisors (typically Xen though KVM is becoming more important), the enterprise cloud is still an ESXi hypervisor stronghold.

Soapbox Rant: Despite the fact that most of the enterprise identifies VMware as their private cloud supplier, a very large majority of claimed “private clouds” are really nothing more than virtualized infrastructure. Yes, we are still fighting the “virtualization does not equal cloud” fight in the 2nd half of 2012. On the “Journey to the Cloud,” most VMware private clouds are still in Phase I or early Phase II and nowhere near a fully elastic, end-to-end automated environment driven by a flexible service catalog, etc.

VMware’s vCloud program includes a lot of components, old and new, anchored by the vCloud Director (“vCD”) cloud management environment. vCD is a fairly rich cloud management solution, with APIs, and several interesting features and add-ons (such as vCloud Connector).

vCD today competes directly with OpenStack Compute (Nova) and related modules. However, it is not really all that widely used in the enterprise (I have yet to find a production vCD cloud, but I know they exist). Sure, there are plenty of vCD installations out there, but I’m pretty sure that adoption has been nowhere near where VMware had hoped (cue the VMware fanboys).

From early days, OpenStack has supported the ESXi hypervisor (while giving Microsoft’s Hyper-V a cold shoulder). It’s a simple calculus – if OpenStack wants to operate in the enterprise, ESXi support is not optional.

With VMware’s overtures to the OpenStack community, if that is what this is, it is possible that the future of vCloud Director could become closely tied to the future of OpenStack. OpenStack innovation seems to be rapidly outpacing vCD, which looks very much like a project suffering from bloated development processes and an apparent lack of innovation. At some point it may have become obvious to people well above the vCD team that OpenStack’s momentum and widespread support could no longer be ignored from inside a protectionist bubble.

If so, VMware should be commended for their courage and openness to support external technology that competes with one of their strategic product investments from the past few years. VMware would be joining the following partial list of OpenStack backers with real solutions in (or coming to) the market:

  • Rackspace
  • Red Hat
  • Canonical
  • Dell
  • Cloudscaling
  • Piston
  • Nebula
  • StackOps

Ramifications

Assuming the future is a converged vCD OpenStack distro (huge assumption), and that VMware is really serious about backing the OpenStack movement, the guys at Rackspace deserve a huge round of applause. Let’s explore some of the potential downstream impacts of this scenario:

  • The future of non-OpenStack cloud stacks is even more in doubt. Vendors currently enjoying some commercial success that are now under serious threat of “nichification” or irrelevancy include Citrix (CloudStack), Eucalyptus, BMC (Cloud Lifecycle Management), and… well, is there really anybody else? You’re either an OpenStack distro, an OpenStack extension, or an appliance embedding OpenStack if you want to succeed. At least until some amazing new innovation comes along to kill it. OpenStack is to CloudStack as Linux is to SCO? Or perhaps FreeBSD?
    • Just weigh the non-OpenStack community against OpenStack’s “who’s who” list above. If you’re a non-OpenStack vendor and you are not scared yet, you may be already dead but just not know it.
    • As with Linux v. Unix, there will be a couple of dominant offerings and a lot of niche plays supporting specific workload patterns.  And there will be niche offerings that are not OpenStack.  In the long run, however, the bulk of the market will go to OpenStack.
  • The automation vendors (BMC, IBM, CA, HP) will need to embrace and extend OpenStack to stay in the game. Mind you, there is a LOT of potential value to what you can do with these tools. Patch management and compliance is just scratching the surface (though you can use Chef for that too, of course). Lots of governance, compliance, integration, and related opportunities for big markets here, and potentially all more lucrative and open to differentiated value. I’ve been telling my friends at BMC this for the past couple of years – perhaps I’ve got to get a bit more vociferous…
  • The OpenStack startups are in a pretty tough position right now. The OpenStack ecosystem has become its own pretty frothy and shark-filled “red ocean,” and the noise from the big guys – Rackspace, Red Hat, VMware, Dell, etc. – will be hard to overcome. I foresee a handful of winners, some successful pivots, and the inevitable failures (VCs invest in risk, right?). There are a lot of very smart people working at these startups, and at cloudTP we work with several of them, so I wouldn’t count any of them out yet. But in the long run, if the history of open source is any indicator, the market can’t support 10+ successful OpenStack software vendors.
  • Most importantly, it is my opinion that OpenStack WILL be the enterprise choice in the next 2-3 years. Vendors who could stop this – including VMware and Microsoft – are not getting it done (Microsoft is particularly missing the boat on the cloud stack layer). We’ll see the typical adoption curve with the most aggressive early adopters deploying OpenStack today and driving ecosystem innovation.
  • Finally, with the cloud stack battle all but a foregone conclusion, the battle for the PaaS layer is ripe for a blowout. And unlike the IaaS stack layer, the PaaS market will be a lot less commoditized in the near future.  There is so much opportunity for differentiation and innovation here that we will all have a lot to keep track of in the coming years.

Alternative Explanations

Perhaps I am wrong and the real motivation here for VMware is to tactically protect their interests in the OpenStack project – ESXi integration, new features tied to the vSphere roadmap, etc. The vCD team may also be looking to leverage the OpenStack innovation curve and liberal licensing model (Apache) to find and port new capabilities to the proprietary VMware stack – getting the benefit of community development efforts without having to invent them.

My gut tells me, however, that this move by VMware will lead to a long-term and strategic commitment that will accelerate OpenStack adoption in the enterprise market.

Either way, VMware’s involvement in OpenStack is sure to change the dynamic and market for cloud automation solutions.

——-

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.


Google Compute Engine – Not AWS Killer (yet)


Google launched their new “Google Compute Engine” yesterday at I/O. Here’s more info about GCE on the Google Developers Blog, and a nice analysis by Ben Kepes on CloudAve. If “imitation is the sincerest form of flattery,” then it’s clear the guys at Google hold Amazon Web Services’ EC2 in very high regard. In many ways, GCE is a really good copy of EC2 circa 2007/2008. There are some differences – like really great encryption for data at rest and in motion – but essentially GCE is EC2 as it was 4-5 years ago.

GCE is missing a lot of what larger enterprises will need – monitoring, security certifications, integration with IAM systems, SLAs, etc. GCE also lacks some of the things that really got people excited about EC2 early on – like an AMI community, or even the AMI model itself that would let me create an image from my own server.

One of the key selling points that people are jumping on is pricing. Google claims 50% lower pricing, but that doesn’t hold up against reserved instances at Amazon, which are actually cheaper over time than GCE. And price is rarely the primary factor in enterprise buying anyway. Plus, you have to assume that Amazon is readying a pricing response, so whatever perceived advantage Google might have there will quickly evaporate.

Other missing features that AWS provides today:

  • PaaS components – Relational Database Service (MySQL, SQL Server and Oracle), Elastic MapReduce, CloudFront CDN, ElastiCache, Simple Queue Service, Simple Notification Service, Simple Email Service
  • Direct Connect – the ability to run a dedicated network connection between your data center and AWS
  • Virtual Private Cloud – secure instances that are not visible to public internet
  • Deployment tools – IAM, CloudWatch, Elastic Beanstalk, CloudFormation
  • Data Migration – AWS Import/Export via portable storage devices (e.g. sneaker net) for very large data sets
  • and others

Bottom line is that GCE is no AWS Killer. Further, I don’t think it ever will be. Even more – I don’t think that should be Google’s goal.

What Google needs to consider is how to create the 10x differentiation bar that any new startup must have.  Google Search was that much better than everybody else when it launched.  GMail crushed Yahoo Mail with free storage, conversation threads and amazingly fast responsiveness. Google Maps had AJAX, which blew away MapQuest and the others. And so on. You can’t just be a bit better and win in this market. You need to CRUSH the competition – and that ain’t happening in this case.

What would GCE have to offer to CRUSH AWS?

  • Free for production workloads up to a fairly robust level (like GMail’s free GBs vs. Yahoo’s puny MBs; the ability to run most small apps at no cost at all would be highly disruptive to Amazon)?
  • A vastly superior PaaS layer (PaaS is the future – If I were rewriting The Graduate… “just one word – PaaS”)?
  • A ginormous data gravity well – imagine if Google built a data store of every bit of real-time market data, trade executions, corporate actions, etc. – they’d disrupt Bloomberg and Thomson Reuters too! Or other data – what is the data that they can own (like GIS, but more broadly interesting) that could drive this?
  • Enterprise SaaS suite tied to GCE for apps and extensions – what if Google bought SugarCRM, Taleo, ServiceNow and a dozen other SaaS providers (or built their own Google-ized versions of these solutions), disrupted the market (hello, Salesforce-like CRM but only free), and then had a great compute story?
  • A ton of pre-built app components (whatever they might be) available in a service layer with APIs?

No matter what the eventual answer needs to be, it’s not what I see on the GCE pages today. Sure, GCE is mildly interesting and it’s great that Google is validating the last 6 years of AWS with their mimicry, but if there’s ever going to be an AWS killer out there – this ain’t it.


Open Clouds at Red Hat

Red Hat has been making steady progress toward what is shaping up as a fairly interesting cloud strategy. Building on their Deltacloud API abstraction layer and their CloudForms IaaS software, a hybrid cloud model is starting to emerge. Add to this their OpenShift PaaS system, and you can see that Red Hat is assembling a lot of key components. Add to that the fact that Red Hat has gotten very involved with OpenStack, providing an interesting dynamic with CloudForms.

Red Hat is the enterprise king in Linux (RHEL), strong in application servers (JBoss), and has a lot of very large customers. Their VM environment, RHEV (based on KVM), won’t displace VMware in the enterprise space any time soon, but it is pretty interesting in the service provider space.

Red Hat’s community open source model will be very appealing to the market.  In fact, any of the OpenStack distro providers should be at least a bit worried that Red Hat might leapfrog them.  With their OpenStack move, CloudForms is being repositioned as a hybrid cloud management tool.  Now their competition in the future might be more along the lines of RightScale and enStratus.  What I’ve seen so far of CloudForms shows a lot of promise, though it’s still pretty immature.

Red Hat is pushing a message about “open clouds” – which is less about open source than it is about avoiding vendor lock-in with cloud providers. That’s something that CloudForms is intended to address. It’s also why OpenShift has been released as an open source project (Apache 2.0 – yay) that can be deployed on other clouds and non-cloud infrastructures.

The big opportunity, IMO, is for Red Hat to go very strong on the OpenStack path for IaaS (e.g. release and support an enhanced Red Hat distro), really push their OpenShift conversation vs. Cloud Foundry based on their ability to drive community (along with its deep integration with JBoss), and move CloudForms further up the stack to a governance and multi-cloud management framework (their messaging on this is not very strong). It’s this model of openness – any cloud, any app – that will make their “Open Cloud” vision a reality.


RACI and PaaS – A Change in Operations

I have been having a great debate with one of my colleagues about the changing role of the IT operations (aka “I&O”) function in the context of PaaS. Nobody debates that I&O is responsible and accountable for infrastructure operations.

Application developers (with or without the blessing of Enterprise Architecture) select platform components such as application servers, middleware etc.  I&O keeps the servers running – probably up to the operating system.  The app owners then manage their apps and the platform components.  I&O has no SLAs on the platform, etc.

In the PaaS era, I think this needs to change.  IT Operations (I&O) needs to have full accountability and responsibility for the OPERATION of the PaaS layer. PaaS is no longer a part of the application, but is now really part of the core platform operated by IT.  It’s about 24×7 monitoring, support, etc. and generally this is a task that I&O is ultimately best able to handle.

Both teams need to be accountable and responsible for the definition of the PaaS layer to ensure it meets the right business and operational needs.  But when it comes to operations, I&O now takes charge.

The implication of this will be a need for PaaS operations and administration skills in the I&O business.  It also means that the developers and application ownership teams need only worry about the application itself – and not the standard plumbing that supports it.

Result?  Better reliability of the application AND better agility and productivity in development.  That’s a win, right?

Cloud API Standardization – It’s Time to Get Serious

UPDATE 6/2

Given the recent losses by Oracle against Google in their Java copyright farce, it looks like using the AWS APIs as a standard for the industry could actually work. Anybody want to take the lead, set up a Cloud API standards body, and publish an AWS-compatible API spec for everybody to use?

——

Okay – this is easy… or is it?

Lots of people continue to perpetuate the idea that the AWS APIs are a de facto standard, so we should all just move on about it. At the same time, everybody seems to acknowledge that Amazon has never indicated that they want to be a true standard. In fact, they have played quite the coy game, keeping silent and luring potential competitors into a false sense of complacency.

Amazon has licensed their APIs to Eucalyptus under what I and others broadly assume to be a hard and fast restriction to the enterprise private cloud market. I would not be surprised to learn that the restrictions went further – perhaps prohibiting Eucalyptus from offering any other API or claiming compatibility with other clouds.

Amazon Has ZERO Interest in Making This Easy

Make no mistake – Amazon cares deeply about who uses their APIs and for what purpose.  They use silence as a way to freeze the entire market.  If they licensed it freely and put the API into an independent governance body, we’d be done.  But why would they ever do this and enable easy portability to other public cloud providers?  You’re right – they wouldn’t. If Amazon came out and told everybody to bugger off, we’d also be done – or at least unstuck from the current stupidly wishful thinking that permeates this discussion.  Amazon likes us acting like the deer-in-the-headlights losers we all seem to be. Why? Because this waiting robs us of our will and initiative.

It’s Time to Create A Cloud API Standard

Do I know what this is or should be? Nope. Could be the OpenStack API. It won’t be the vCloud API. It doesn’t freaking matter. Some group of smart cloud platform providers out there should just define, publish, freely license and fully implement a new standard cloud API.

DO NOT CREATE A CLOUD API STANDARDS ORG OR COMMITTEE. Just go do it: publish the spec under Creative Commons, license the implementation under Apache, commit to it and go. And AFTER it gets adopted and there’s some need for governance going forward, then create a governance model (or just throw it under Apache). Then every tool or system that needs to talk to cloud APIs has to implement only two: once for Amazon and once for the true standard.

Even give it a branding value like Intel Inside and make it an evaluation criterion in bids and RFPs. I don’t care – just stop treating the AWS API as anything other than a tightly controlled proprietary API from the dominant cloud provider, one that you should NOT USE EVER (once there is a standard).

Take it one step further – publish a library to translate the standard API to AWS under an Apache license and get people to not even code the AWS API into their tools. We need to isolate the AWS API behind a standard API wall. Forever.
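To make the “standard API wall” idea concrete, here is a minimal sketch in Python of what such a translation library might look like. The StandardCompute interface and its method names are entirely hypothetical (no such standard exists today), and the AWS side assumes the classic boto 2.x EC2 bindings:

```python
# Sketch of the "standard API wall": tools code against a neutral compute
# interface, and a translation layer maps it onto AWS. StandardCompute is
# hypothetical; the AWS calls use boto's classic (2.x) EC2 interface.

import boto.ec2


class StandardCompute(object):
    """Hypothetical vendor-neutral compute API that tools code against."""

    def launch(self, image_id, size):
        raise NotImplementedError

    def terminate(self, instance_id):
        raise NotImplementedError


class AWSTranslator(StandardCompute):
    """Speaks AWS behind the wall so nothing above this layer has to."""

    def __init__(self, region, access_key, secret_key):
        self.conn = boto.ec2.connect_to_region(
            region,
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key)

    def launch(self, image_id, size):
        reservation = self.conn.run_instances(image_id, instance_type=size)
        return reservation.instances[0].id

    def terminate(self, instance_id):
        self.conn.terminate_instances(instance_ids=[instance_id])
```

A tool written against StandardCompute never touches the AWS API directly; an OpenStack (or any other) translator could be swapped in without changing the calling code.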

Then, and only then, perhaps we can get customers together and get them to force Amazon to change to the standard (which they will do if they are losing enough business but only then).

Eucalyptus and AWS – Much Ado About Nothing

UPDATED:  Eucalyptus announced a $30M financing round from a great group of VCs.  That will buy them some room but if they want to get a good return on the $55.5M they’ve raised, they’re going to need to hit it out of the park.  At least they’ll be busy spending all that green.

Yes, this is a delayed post.  But hey, I’m busy.

The Eucalyptus – AWS announcement last week was really a great case of Much Ado About Nothing.  Marten Mickos is a great marketer, and the positioning in this story was almost magical.  For a while there it seemed that Amazon had truly anointed Eucalyptus as “the private cloud” for the enterprise.

Here is 100% of the content behind this story as far as I can tell:  Amazon granted Eucalyptus a license to the AWS API and might provide some technical assistance.  That’s it – there is no more.

I got the release from Amazon and managed to have a quick email Q&A with an Amazon spokesperson (below).

1.  What’s new here?  Eucalyptus has had AWS-compatible APIs since the beginning.

This agreement is about making it simple to move workloads between customers’ on-premise infrastructure running Eucalyptus and AWS—all the while being able to use common tools between the environments.  That’s what Eucalyptus and AWS are working to make easier for customers.

As part of this agreement, AWS will be working with Eucalyptus as they continue to extend compatibility with AWS APIs and customer use cases.

2. Does this mean that Eucalyptus has been granted a license to use the AWS APIs verbatim (e.g. a copyright license)? And if so, does that mean that other cloud stacks would not be granted a license to legally use the AWS APIs?

Yes, to your first question.  Each situation is different and we’ll evaluate each one on its own merits.

3. Are Amazon and Eucalyptus collaborating on APIs going forward, or will Amazon release APIs and let Euca use them? Also, will Eucalyptus have advance visibility into API development so they can release simultaneously?

We’re not disclosing the terms of our agreement.

4. Is Amazon helping Eucalyptus develop new features to be more compatible across the AWS portfolio? Examples might include RDS, SimpleDB, EMR, Beanstalk, etc. Without support for the PaaS-layer components, Eucalyptus is only partly compatible, and migration between internal and external clouds would be restricted.

No.  This relationship is about making workloads simple to move between on premise infrastructure and AWS.

5.  Does “As part of this agreement, AWS will support Eucalyptus as they continue to extend compatibility with AWS APIs and customer use cases” imply that Amazon’s Premier Support offerings will be extended to Eucalyptus so a customer can get support for both from Amazon?  Or is this more about the AWS team supporting the Eucalyptus team in their quest to maintain API parity?

AWS will be working with Eucalyptus to assure compatibility, but will not be supporting Eucalyptus customers or installations.  Support will be provided directly by Eucalyptus to their customers, just as was the case before this agreement.

6. Will Amazon resell Eucalyptus?

No.

7. Will Eucalyptus resell Amazon?

No.

8. Will Eucalyptus-based private clouds be visible/manageable through the AWS Management Console, or through CloudWatch?

The AWS management console does not support Eucalyptus installations.

9.  Is this exclusive or will Amazon be open to other similar partnerships?

It is not exclusive.

Not exclusive – that means Eucalyptus is not “the anointed one.”  No operational integration (e.g. CloudWatch, etc.) means that “common tools” in the answer to Q1 is RightScale, enStratus etc. Here’s a question I didn’t ask and, based on the answer to Q3 above, I would not expect to be answered — What did Eucalyptus commit to in order to get the license grant (which is the only news here)?

I’m going to go out on a limb here and speculate that the license grant applies to Eucalyptus only when deployed in a private cloud environment. It would be my expectation that Amazon would not want to legitimize any use of their APIs by service providers against whom they would compete. It’s not in Amazon’s best interest to make the AWS API an open standard that would enable public cloud-to-cloud compatibility.  Eucalyptus only targets on-premise private clouds so that would have been an easy give.

Okay, so how much does it matter that your private cloud has the same API as Amazon? On the margin, I suppose it’s a good thing. But RightScale and enStratus both do a great job of encapsulating multiple cloud APIs behind their management interfaces. Unless I’m building my own automation layer internally to manage both AWS and my private cloud, the API does not have to be the same as long as the feature sets are close enough.
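At the code level, this kind of encapsulation is exactly what a library like Apache Libcloud provides. A minimal sketch – the credentials are placeholders, and driver constructors vary slightly by provider:

```python
# Listing nodes across two clouds through one abstraction (Apache Libcloud).
# The calling code sees the same Node model whether the cloud behind the
# driver is EC2 or Rackspace. Credentials below are placeholders.

from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver


def list_all_nodes(clouds):
    """clouds: list of (provider_constant, key, secret) tuples."""
    for provider, key, secret in clouds:
        driver = get_driver(provider)(key, secret)
        for node in driver.list_nodes():
            print("%s: %s (%s)" % (provider, node.name, node.state))


list_all_nodes([
    (Provider.EC2, 'AWS_ACCESS_KEY', 'AWS_SECRET_KEY'),
    (Provider.RACKSPACE, 'RACKSPACE_USER', 'RACKSPACE_API_KEY'),
])
```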

There’s some info about the Citrix Apache CloudStack project and AWS API, but I have no information that Amazon has granted Citrix or the Apache Foundation a license.  Will update you when I learn more.

All in all, this turned out to be not that interesting.  I like Marten and have no dog in this hunt, but I don’t think that this announcement in any way improves the long-term market for Eucalyptus.  And after the Citrix CloudStack announcement today, I would say that things are looking cloudier than ever for the Eucalyptus team.

Talking Cloud in the Enterprise

Despite the fact that cloud is part of the daily conversation in many enterprises, I still find a significant gap in many places in terms of a true understanding of what it means. This is somewhat compounded by the reliance on standard definitions of cloud computing from NIST and other sources. These definitions are helpful in some respects, but they are far more focused on attributes than on business value – and the business value is what is truly needed in the enterprise to break through the barriers to cloud computing.

First, let’s divide the enterprise IT landscape into three buckets:

  • Infrastructure & Operations: this is the core group in IT responsible for operating the data centers, running the servers, keeping the lights on, ensuring adequate capacity, performing IT governance, etc. We’ll call them I&O.
  • Applications: whether custom application development by a dev team, or a COTS application licensed in, or a SaaS app running externally, the applications are where the core value of IT is created. As a general rule, I&O exists to serve applications – not the other way around (though I think we can all come up with situations in our past where the nature of that relationship has not been so clear).
  • The Business: these are the users and application owners. Developers build apps for “the business” and I&O hosts them. Often times, especially in large enterprises, application development actually sits within the business under a business line CIO. So the app owners also control the application development and are the “customers” of I&O.

When talking about cloud, it’s really critical to have this context in mind. If you are talking to the business, they care about some very specific things related to their applications, and they have requirements that I&O needs to address. If you are talking to I&O, they have a related but very different set of issues and requirements and you need to address them in terms that are relevant to them. Let’s start with the business.

Talking Cloud to the Business

If you are speaking with the application owners within a business, they care about the following (generally unsatisfied) requirements with respect to their infrastructure and IT needs (the ordering is not important – it will be different for different businesses):

  1. Control – The pre-cloud world is one in which the business makes requests of I&O, often through an onerous and labor intensive service request workflow involving forms, approvals, emails, negotiation, rework, and more. The business puts up with this because they have to, or they just go outside and procure what they need from external vendors. As with many innovations, cloud computing first entered through the business and only later got adopted by IT. As a business, I really want to be able to control my IT consumption at a granular level, know that it will get delivered reliably and quickly with no errors, etc. This is the concept of “on-demand self-service.” Let me configure my requirements online, push a button, and get it exactly as I ordered with no fuss.
  2. Transparency – I heard a story once where a company had hired so many IT finance analysts that there were more people accounting for IT than actually producing it. It may be a myth, but I can see how it might actually happen. If I apply the management accounting principles of the shop floor to IT, I start to get into activity-based costing, very complex allocation formulas, etc. But even with that, it’s still viewed by the business as more of a black box than transparent. I sat with some IT folks from Massachusetts a couple years ago and they all groused about how costs were allocated – with the exception of one guy at the table who knew he was getting a good deal. Winners and losers, right?  What the business wants today is transparency. Let me know the cost per unit of IT, in advance, and give me control (see 1) over how much I consume and let me know what I’ve used and the costs incurred along the way. No surprises, no guess work, no hassle. In the NIST cloud world we call this “measured” IT.
  3. Productivity & Innovation – Pre-cloud I&O processes are often so onerous that they significantly impact developer productivity. If it takes me several meetings, days of analysis, and hours of paperwork to properly size, configure and formulate a request for a VM, that’s a huge productivity drain. Further, if I have to wait several days or even weeks before this VM is available to me, that slows me down. At one financial institution I spoke with, the VM request form was 4 packed pages long, required 12 approval steps, and each approval step had an SLA of 3 weeks. Yes, that’s a potential of 36 weeks to return a VM and still hit their SLAs to the business. In reality it never took 36 weeks – but it often took 6-10 weeks for a VM. Seriously, why can’t I just have a VM now, when I need it? That’s what the business wants. Related to productivity, innovation is seriously stifled in most enterprise IT environments. Imagine if I’m on a roll, have this great idea, but need a VM to test it. Then imagine a series of hurdles – sizing, configuration, paperwork, approvals and waiting!! Now, it may be a pretty cool idea, but unless it’s part of my top priority task list and was approved months ago, it just isn’t going to happen. The business wants support for innovation too. That means it wants speed. This is the concept of “elasticity” in IT. Give me as much as I want/need now, and when I’m done, you can have it back.
  4. Cost – Last but often not least, the business wants a smaller bill from IT – and the benchmark is no longer in their peer group. The benchmark is Amazon, Google, Microsoft, Rackspace and others. The benchmark is the cloud. Why pay $800/month for a virtual machine when Rackspace will rent it to me for $100? Not only does the business want better IT – more control, transparency, productivity, and innovation – but they also want it at a lower cost. Easy right?

When engaging with the business application owners about their cloud needs (you do this, right?), and they are having a hard time articulating what is important to them and why they want cloud, ask them if they want more control, transparency, productivity & innovation, and lower cost.  If they don’t really want most of this, then perhaps they don’t want or need a cloud.

Talking Cloud to IT Infrastructure & Operations (I&O)

In short, I&O really would like to satisfy the requirements of the business listed above. Remember that I&O’s mission is to serve the business by serving their applications. When talking with the I&O side of the house (make no mistake, there are at least 2 sides here), talk in terms of the requirements of the business. Yup – control, transparency, productivity & innovation, and cost.

How? Be a cloud provider to the business, of course. But what does that mean? So many people I meet still think that a self-service portal in front of a vSphere cluster is all it means to be a cloud. It’s more than this – it’s a completely end-to-end automated operations model for delivering IT services. In order to meet all of the requirements above, including at a reasonable cost, everything that can be automated should be automated. So-called “enterprise clouds” that still require manual steps in the provisioning process cannot achieve the cost advantages of a fully automated environment (unless of course the cost of putting in the automation, divided by the number of units produced, is greater than the cost of doing it manually). This is no different than with the making of products. Even in many heavily automated mass-production businesses such as auto manufacturing, IT is still done in a way where every VM and deployment landscape is an exception crafted by hand. That’s a huge waste!
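That parenthetical is really just a break-even calculation. A trivial sketch, with made-up numbers purely for illustration:

```python
# Break-even point for automating a provisioning workflow.
# All figures are illustrative placeholders, not benchmarks.

automation_build_cost = 150000.0  # one-time cost to build/integrate the automation
manual_cost_per_vm = 400.0        # labor cost to hand-provision one VM
automated_cost_per_vm = 20.0      # residual per-VM cost once automated

# Automation wins once the per-unit savings have covered the build cost.
break_even_vms = automation_build_cost / (manual_cost_per_vm - automated_cost_per_vm)
print("Automation pays for itself after ~%d VMs" % break_even_vms)  # ~394 VMs
```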

Cloud computing operating models (cloud is not a technology, it’s the application of many technologies to change the way IT is operated) grew out of necessity. How could Google, Amazon or other large-scale web businesses possibly handle tens of thousands of servers without either armies of low-cost workers or “extreme” automation? They could not, and neither can you, even if your data center footprint is in the hundreds-of-servers range. Clearly the automation route is less expensive in the long run, at least for the vast majority of tasks and actions that are performed in a data center every day.

Now enterprise IT gets to apply many of the same techniques used by cloud providers in their own operations. With all of the software out there for building infrastructure (IaaS) and platform (PaaS) clouds, it’s never been easier to envision and implement the “IT Factory of the Future” in any sized environment. Take OpenStack, BMC Cloud Lifecycle Management, VMware vCloud or another cloud stack and create your infrastructure factory. Then add Apprenda, Cloud Foundry, or one of dozens of other PaaS frameworks and create your application platform factory. If fully implemented and integrated, the variable labor cost for new units of IT production (a VM, a scaled front end, etc.) will approach zero.
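As a simplified illustration of what the infrastructure factory looks like at the API level, here is a sketch using the python-novaclient v1_1 bindings of this era; the credentials, endpoint, image and flavor names are placeholders:

```python
# Requesting a standard-configuration VM from an OpenStack cloud through the
# python-novaclient v1_1 bindings. Credentials, endpoint, image and flavor
# names are placeholders.

from novaclient.v1_1 import client

nova = client.Client('username', 'password', 'tenant_name',
                     'http://openstack.example.com:5000/v2.0/')

flavor = nova.flavors.find(name='m1.small')    # standard size from the catalog
image = nova.images.find(name='rhel-6-base')   # approved base image

# One API call replaces the multi-week request form described earlier.
server = nova.servers.create(name='app-tier-01', image=image, flavor=flavor)
print("Requested server %s, status %s" % (server.id, server.status))
```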

Let’s take this to an extreme. Some in IT have titles like VP or director of infrastructure. They run the I&O function. Let’s give them a new title – IT plant manager if they run one data center. Or VP of IT production if they run all I&O. Even if you don’t go that route, perhaps that’s how these people need to see themselves going forward.

Related Posts

“Putting Clouds in Perspective – Cloud Redefined”

“Don’t Mention the Cloud”

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.

 

How Some Journalists Confuse People About Cloud

Simon Wardley and I had a quick exchange about the sloppy and factually inaccurate writing of Wired’s Jon Stokes. Simon commented about a November post on Wired Cloudline as follows:

@swardley:  “This Wired post on cloud from Nov ’11 – where it isn’t wrong (repeating unfounded myths), it is tediously obvious – bit.ly/wWLbsL”

I piled on and Simon posted about another post here.

@swardley: “Oh dear, another of the wired author’s articles – http://bit.ly/vHWPZW – is so full of holes, well, no wonder people are confused.”

Stokes replied here.

@jonst0kes:  “@cloudbzz @swardley And I’d like to think that one of you could write a real takedown instead of slinging insults on twitter.”

Challenge accepted.

Let me just start by stating the obvious – when a respected editor like Stokes at a very respected zine like Wired puts up crap, misinformation and rubbish, it just confuses everybody. If very knowledgeable people like Simon Wardley are calling bullshit on someone’s weak attempt at journalism, then you can bet that something is not right.

Wired Cloudline Post by Jon Stokes – “An 11th Law of Cloudonomics”

Stokes:  “Don’t build a public cloud–instead, build a private one with extra capacity you can rent out.”

I’m sorry, but if you’re renting out your cloud, it’s public – so you’re building a public cloud and you’d better damned well know what you’re getting into. Anybody who has a clue about building clouds knows that there are tremendous differences in terms of requirements and use cases – depending on the cloud, the maturity of your ops team, and a whole bunch of other factors. Yes, you can build a cloud that is dual use, but it’s rare and very difficult to reconcile the differing needs. I know of only one today – and it’s in Asia, not in the U.S.

Stokes: “If you look at the successful public clouds—AWS, AppEngine, Force.com, Rackspace—you’ll notice that they all have one thing in common: all of them were built for internal use, and their owners then opted to further monetize them by renting out excess capacity.”

Garbage!  Amazon’s Bezos and CTO Werner Vogels have repeatedly disputed this.  Here is just one instance that Vogels posted on Quora:

Vogels: “The excess capacity story is a myth. It was never a matter of selling excess capacity, actually within 2 months after launch AWS would have already burned through the excess Amazon.com capacity.”

Rackspace built their public cloud as a public cloud, and never had any internal use case that I can come up with (they’re a hosting company at their core – what would they have a private cloud for internally??). For private clouds, they actually use a very different technology stack based on VMware, whereas their public Cloud Servers is built on Xen. But again, their private clouds are for their customers, not for their own internal use.

Stokes: “It’s possible that in the future, OpenStack, Nimbula, and Eucalyptus will create a market for what we might call “AWS clones”—EC2- and S3-compatible cloud services that give you frictionless portability among competing clouds.”

Eucalyptus is the only stack that is remotely an AWS clone – and that’s how it started as a project at UC Santa Barbara. OpenStack is based on Rackspace and NASA Nebula – not AWS clones – and Nimbula is something built by former AWS engineers but is also not a clone. There are some features that are common to enable federation, but that’s hardly being a clone (we call it interoperability). And none of them give you frictionless portability between each other.

Stokes: “In that future, we could see a company succeed by building a public cloud solely for the purpose of making it an AWS clone.”

Huh? That’s about the least likely scenario for success I could dream of. If all I do is build an AWS clone to compete against Amazon with its scale, resources and brand, then I’m the biggest moron on the planet. That would be total #FAIL.

Stokes: “…attempts to roll out new public clouds and attract customers to them will fail because it’s too expensive to build a datacenter and then hang out a shingle hoping for drop-in business to pick up.”

Generally I agree with this, but not for the reasons Stokes gives. Most cloud providers don’t need to build a data center. You can get what you need from large DC providers (space, power, HVAC, connectivity) and build your cloud. But you need to have a reason for customers to consider your cloud, and the idea of “build it and they will come” is a truly lame strategy. I don’t know a single cloud provider today that is operating on that model.

Stokes: “And most cloud customers are drop-in customers at the end of the day.”

Most startups might “drop in” on a cloud. But most enterprises certainly are more mature than that. You don’t drop in on IBM’s cloud (which is pretty successful), or Terremark’s or Savvis’s. Gartner MQ upstart BlueLock is (a) not even remotely an AWS clone, (b) having really great success, and (c) does not want or allow “drop-in customers” at all (you need to call them and talk to a sales rep).

Going forward I expect better from Stokes and the folks at Wired.

 

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.


PaaSing Comments – Data and PaaS

I’ve been looking at the PaaS space for some time now.  I spent some time with the good folks at CloudBees (naturally), and have had many conversations on CloudFoundry, Azure, and more with vendors, customers and other cloudy folks.

Krishnan posted a very good article over on CloudAve, and at one level I fully agree that PaaS will become more of a data-centric (vs. code-centric) animal over the next few years. To some degree that’s generally true of all areas of IT – data, intelligence and action from data, etc. But there is a lot more to this.

Most PaaS frameworks have very few actual services – other than code logic containers, maybe one messaging framework, and some data services (structured and unstructured persistence and query).  You get some scale out, load balancing, and rudimentary IAM and operations services.  Over time as the enterprise PaaS market really starts to take off, we may find that these solutions are sorely lacking.

In the data and analytics space alone there are many types of services that PaaS frameworks could benefit from: data capture, transformation, persistence (which they already have), integration, analytics and intelligence. But this is too one-dimensional. Is it batch or realtime, or high-frequency/low-latency? What is the volume of data, how does it arrive and in what format? What is the use case of the data services? Is it structured or unstructured? Realtime optimization of an individual user’s e-commerce experience, or month-end financial reporting and trend analysis?

Many enterprises have multiple needs and different technologies to service them.  Many applications have the same – multiple data and analytical topologies and requirements.  Today’s complex applications are really compositions of multiple workload models, each with its own set of needs.  You can’t build a trading system with just one type of workload model assumption.  You need multiple.

A truly useful PaaS environment is going to need a “specialty engine” app store model that enables developers to mix and match and assemble these services without needing to break out of the core PaaS programming model. They need to be seamlessly integrated into a core services data model so the interfaces are consumed in a consistent manner and behave predictably.
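To illustrate the shape of “consumed in a consistent manner,” here is an entirely hypothetical sketch of a common binding contract that every specialty engine in such an app store might implement – this is not any existing framework’s API:

```python
# Hypothetical binding contract for PaaS "specialty engines" (data stores,
# queues, analytics services). The point is the consistent shape of the
# interface, not any particular framework.


class ServiceBinding(object):
    """What an application sees when a platform service is bound to it."""

    def __init__(self, name, endpoint, credentials):
        self.name = name                # e.g. "orders-queue"
        self.endpoint = endpoint        # connection URI for the engine
        self.credentials = credentials  # injected by the platform, not the app


class SpecialtyEngine(object):
    """Base contract every engine in the catalog would implement."""

    def provision(self, plan):
        """Create a service instance; 'plan' selects size/tier/topology."""
        raise NotImplementedError

    def bind(self, app_name):
        """Return a ServiceBinding the app consumes the same way whether the
        engine is a column store, a stream processor, or a message bus."""
        raise NotImplementedError
```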

Data-centricity is one of the anchor points. But so is integration. And messaging. And security in all its richness and diversity of need.

This gets back to the question of scale.  Salesforce has the lead, but they also have a very limiting computational model which will keep them out of the more challenging applications.  Microsoft is making strides with Azure, and Amazon continues to add components but in a not-very-integrated way.  But will a lot of other companies be able to compete?  Will enterprises be able to build and operate such complex solutions (they already do, but…)?

This is a great opportunity and challenge, and I have great expectations that we will be seeing some exciting innovations in the PaaS market this year.


Cloudy Implications and Recommendations in Megaupload Seizure

The FBI seized popular upload site Megaupload.com yesterday.  They took the site down and now own the servers.

I am not an attorney, and I have no opinion on whether or not the MegaUpload guys were breaking laws or encouraging users to violate copyrights through illegal uploading and streaming of movies, recordings, etc.  Right or wrong, the FBI did it and now we need to deal with the fallout.

The challenge is that there were very likely many users who were not breaking any laws.  People backing up their music, photos, websites, documents and who knows what else.  I highly doubt any large corporations would want to use such a site, but I bet a lot of small businesses did.  My focus here is on the ramifications to the enterprise, and how to protect yourself from being impacted by this.

What if the offending site was using Amazon, Google or Microsoft to store their bad content?  I’m sure that the Feds would have had no problems getting the sites shut down through these companies without needing to resort to taking them offline.  But legally could they have gone in and seized the AWS data centers?  Or some of the servers?  Maybe legally, but perhaps not easily for both technical and legal reasons (Amazon has lots of money for lawyers…).

What if the cloud provider was someone smaller, without the financial ability to challenge the FBI?  I mean, those guys usually don’t call ahead — they just bust in the door and start taking stuff.  The point is that IT needs to take some steps that protect themselves from getting caught up in an aggressive enforcement action, legitimate or not.

Recommendations to IT

  1. Stick with larger, more legitimate vendors that have the ability to square up with the Feds when necessary – not that this will stop them, but it could slow them down enough to let you get your data
  2. Encrypt your data using your own keys so that even if your servers get taken, your data is secured (of course, that’s just a good idea in general) – see the sketch after this list
  3. Back up your data to another cloud or your own data center. Having all of your eggs in one basket is just stupid (and that goes for consumers who just trust a single backup provider like Carbonite, which stated in its S-1 offering docs that it expected to lose data and that the consumer’s PC was assumed to be the primary copy!)
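For recommendation 2, here is a minimal sketch of client-side encryption with keys you control, using PyCrypto’s AES in CFB mode. Key management and storage are deliberately omitted, so treat it as an illustration rather than a hardened implementation:

```python
# Encrypt data with a key you hold before it ever reaches a cloud provider
# (PyCrypto, AES in CFB mode). Key storage/rotation is omitted on purpose.

import os
from Crypto.Cipher import AES

key = os.urandom(32)   # 256-bit key that never leaves your control
iv = os.urandom(16)    # fresh initialization vector per object

plaintext = open('backup.tar', 'rb').read()
ciphertext = AES.new(key, AES.MODE_CFB, iv).encrypt(plaintext)

# Ship iv + ciphertext to the provider; a seizure of their servers yields
# nothing readable without the key you kept on premise.
with open('backup.tar.enc', 'wb') as out:
    out.write(iv + ciphertext)
```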

Feds, Please Consider Doing it Differently

Perhaps we need some legislation to protect the innocent legitimate users from the enforcement fallout caused by people who are clearly breaking laws.  I don’t understand why, for example, the FBI could not have copied off all of the files, logs, databases etc. but left the site running.  Even watching the traffic that occurred after the announcement could have given the FBI some interesting insights into some of the illegal usage.

Bottom Line – protect yourself because this is a story that could be coming to your preferred cloud someday.


Cloud Stack Red Ocean Update – More Froth, but More Clarity Too

The cloud stack market continues to go through waves and gyrations, but increasingly now the future is becoming more clear.  As I have been writing about for a while, the number of competitors in the market for “cloud stacks” is totally unsustainable.  There are really only four “camps” now in the cloud stack business that matter.

The graphic below shows only some of the more than 40 cloud stacks I know about (and there are many I surely am not aware of).

VMware is really on its own.  Not only do they ship the hypervisor used by the vast majority of enterprises, but with vCloud Director and all of their tools, they are really encroaching on the traditional data center/systems management tools vendors.  They have great technology, a huge lead in many ways, and will be a force to reckon with for many years.  Many customers I talk with, however, are very uncomfortable with the lack of openness in the VMware stack, the lack of support for non-virtualized environments (or any other hypervisor), and a very rational fear of being monopolized by this machine.

Data Center Tools from the big systems management vendors have all been extended with cloud capabilities for use in both private and public clouds. Late to the party, they are investing heavily and have shown fairly significant innovation in recent releases. Given that the future of the data center is a cloud, this market is both a huge opportunity and an existential threat. Deep hooks into the data center with service desks, service catalogs, automation and orchestration capabilities provide near-term protection. There are just too many trained resources with too much invested for most IT organizations to just walk away.

Unlike the VMware approach, all of these vendors support a more heterogeneous environment – especially CA and BMC.  Most support some combination of Xen, KVM and Hyper-V in addition to VMware hypervisors.  They are also moving up-stack, supporting integration with public clouds such as Amazon and others, application-level functionality, and more.

OpenStack is the new 800-lb gorilla.  In less than 18 months OpenStack has emerged as the most vibrant, innovative and fast-moving segment of this market.  Evidence of progress includes contributed code from over 1,000 developers, more than 128 companies in the community, a growing list of commercial distributions from  incredibly smart teams, and a maturing technology base that is starting to gain traction in the enterprise. It’s still very early days for OpenStack, but it very much feels like the counterweight to VMware’s controlling influence.

The froth in this market is coming from an increasing number of very cool (and occasionally well-funded) OpenStack commercialization efforts. As with most markets, there will be winners and losers, and some of these efforts will not make it. This market is so new that whatever shakeout may occur, it won’t happen for a few years.

Other solutions are going to find the going tougher and tougher.  Some may be doing well and growing today, but ultimately the market will shake out as it always does and many of these current solutions will either find new use-cases and missions, or they will be shuttered. I have unconfirmed reports of at least two of the currently available stacks on my list being withdrawn from the market for lack of sales.  Is this the start of a “great cloud stack shakeout?”

Where are we heading?

The majority of the market in 3 years will have coalesced into three big buckets, and it’s starting to happen now.  vCloud, OpenStack and the big data center vendors will rule the roost at the core stack level going forward.  The graphic below is not intended to show the size of these markets.

The guys in the “other” category reading this post are probably not ready to hear the bad news, but this is what I believe to be the ultimate state. There will be niche survivors, some who will migrate to the OpenStack island (rumors abound), and others who may pivot to new markets or solution designs.  Some are just focusing on Asia, especially China, since it’s more of a wild west scenario and just showing up is guaranteed to generate deals.  However, many of them will have gone out of business by 2015 or be barely scraping by. Such is the nature of new markets.

One key distinction with the “big four” data center/systems management tools vendors is that they are not going to be the same kind of open and vibrant ecosystems as OpenStack or vCloud.  With their huge sales organizations and account presence, they don’t necessarily need the leverage that an ecosystem provides. Some in the #clouderati community might conclude that they are toast.  I’ve heard several say that there will be only two choices in the coming years, but I disagree and do think that the DC tools guys get it now and have a lot of money to invest.

I have this opinion based on spending most of my days working with large enterprises and governments who have millions invested in these vendors, and I expect a fair bit of enterprise cloud infrastructure – especially for their more mission-critical applications – to be real long-term opportunities for the big guys.  vCloud and OpenStack will certainly hurt them in their core markets, however, and there will be lots of pivots and new initiatives from these mega vendors to ensure their relevancy for a long time to come.

Bottom line?

The market is starting to form up, and it looks like there will be three big segments going forward (and a small market of “other”). If you’re not in one of them, and solidly so, you’re doing something else in a few years. There just won’t be enough revenue to support 40+ profitable and viable vendors.  How many will survive? That’s a tough question, but here’s my prediction for the market breakdown in 2018.

VMware:  1

OpenStack commercial distributions:  4 viable, 1 or 2 that are clear leaders

DC Tools:  4 main and a couple smaller guys

Other: at most 3, mainly in niche markets

Total:  12 viable cloud stack businesses in 2018

What do you think?

 

 

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.

 

 

Don’t Mention the Cloud

"I mentioned it once, but I think I got away with it alright."

The “cloud” term has started to turn like the leaves on the trees outside my window. It’s yellowing, drying out and about to fall to earth to be raked up and composted into fertilizer if something isn’t done to stop it.

Where once it was the magic phrase that opened any door, the term “cloud” is now considered persona non grata in many meetings with customers. When everything’s a cloud – and today “cloud washing” is an epidemic on an unprecedented scale – the term loses meaning.

When everything’s a cloud, nothing is.

In fact, not only does “cloud” mean less today than a year ago, what it does mean is not good. For many customers, “cloud” is just a pig with cloud lipstick. And whose fault is this? It’s ours – all of ours in the IT industry. We’ve messed it up – potentially killing the golden goose.

A Vblock is not a cloud (not that a Vblock is a pig). It’s just a big block of “converged infrastructure.” Whatever its merits, it ain’t a cloud. You can build a cloud on top of a Vblock, which is great, but without the cloud management environment from CA, BMC, VMware (vCloud) or others, it’s just hardware.

A big EMC storage array is not a cloud either, but that doesn’t stop EMC from papering airports around the globe with “Journey to the Private Cloud” banners. Nothing against EMC.  And VMware too often still confuses your cloud state with what percent of your servers are virtualized.  Virtualization is not cloud.  Virtualization is not even a requirement for cloud – you can cloud without a VM.

A managed hosting service is not a cloud.

Google AdWords is not cloud “Business Process as a Service” as Gartner would have you believe. It’s advertising!  Nor is ADP Payroll a cloud (sorry again, Gartner), even if it’s hosted by ADP.  It’s payroll.  By their logic, Gartner might start to include McDonalds in their cloud definition (FaaS – Fat as a Service?). I can order books at Amazon and they get mailed to my house.  Is that “Book Buying as a Service” too?  Ridiculous!

And then there’s Microsoft’s “To the Cloud” campaign with a photo app that I don’t believe even exists.

It’s no wonder, then, that customers are sick and tired and can’t take it (cloud) anymore.  Which is why it’s not surprising when many customer “cloud” initiatives are actually called something else.  They call it dynamic service provisioning, or self service IT, or an automated service delivery model.  Just don’t use the “cloud” term to describe it or you might find yourself out in the street quicker than you can say “resource pooling.”

There’s also that pesky issue about “what is a cloud, anyway?” that I wrote about recently. For users, it’s a set of benefits like control, transparency, and productivity.  For providers, it’s Factory IT – more output at higher quality and lower cost.

When talking about “cloud computing” to business users and IT leaders, perhaps it’s time to stop using the word cloud and start using a less ambiguous term. Perhaps “factory IT” or “ITaaS” or some other term to describe “IT capabilities delivered as a service.”

No matter what, when speaking to customers be careful about using the “cloud” term.  Be precise and make sure you and your audience both know what you mean.

The Red Ocean of Cloud Infrastructure Stacks (updated)

Update: I’m still revising this… Reposting now – but send me your comments via @CloudBzz on Twitter if you have them.

It seems like every day there’s a new company touting their infrastructure stack.   I’m sure I’m missing some, but I show more than 30 solutions for building clouds below, and I am sure that more are on their way.  The market certainly can’t support so many participants!  Not for very long anyway.  This is the definition of a “red ocean” situation — lots of noise, and lots of blood in the water.

This is the list of the stacks that I am aware of:

I. Dedicated Commercial Cloud Stacks

II.  Open Source Cloud Stacks

III.  IT Automation Tools with Cloud Functionality

IV.  Private Cloud Appliances

I hope you’ll pardon my dubious take, but I can’t possibly understand how most of these will survive.  Sure, some will because they are big and others because they are great leaps forward in technology (though I see only a bit of that now).  There are three primary markets for stacks:  enterprise private clouds, provider public clouds, and public sector clouds.  In five years there will probably be at most 5 or 6 companies that matter in the cloud IaaS stack space, and the rest will have gone away or taken different routes to survive and (hopefully) thrive.

If you’re one of the new stack providers – think long and hard about this situation before you make your splash.  Sometimes the best strategy is to pick another fight.  If you swim in this red ocean, you might end up as shark bait.


Putting Clouds in Perspective – Cloud Redefined

[Image: “A Change of Perspective” by kuschelirmel]

You’d think as we head into the waning months of 2011 that there’d be little left to discuss regarding the definition of cloud IT.  Well, not quite yet.

Having spent a lot of time with clients working on their cloud strategies and planning, I’ve come to learn that the definition of cloud IT is fundamentally different depending on your perspective.  Note that I am using “cloud IT” and not “cloud computing” to make it clear I’m talking only about IT services and not consumer Internet services.

Users of cloud IT – those requesting and getting access to cloud resources – define clouds by the benefits they derive.  All those NIST-y terms like resource pooling, rapid elasticity, measured service, etc. can sound like gibberish to users.  Self-service is just a feature – but users need to understand the benefits.  For a user – cloud IT is about control, flexibility, improved productivity, (potentially) lower costs, and greater transparency. There are other benefits, perhaps – but these are commonly what I hear.

For providers – whether internal IT groups or commercial service providers – cloud IT means something entirely different.  First and foremost, it’s about providing services that align with the benefits valued by users described above.  Beyond that, cloud IT is about achieving the benefits of mass production and automation, a “factory IT” model that fundamentally and forever changes the way we deliver IT services.  In fact, factory IT (McKinsey blog) is a far better term to describe what we call cloud today when you’re talking to service providers.

Factory IT standardizes on a reasonable number of standard configurations (service catalog), automates repetitive processes (DevOps), then manages and monitors ongoing operations more tightly (management). Unlike typical IT, with its heavily manual processes and hand-crafted custom output, factory IT generates economies of scale that produce more services in a given time period, at a far lower marginal cost per unit of output.

Delivering these economies end-to-end is where self-service comes in.  Like a vending machine, you put your money (or budget) in, make a selection, and out pops your IT service.  Without factory IT, self service – and the control, transparency, productivity and other benefits end users value – would not be possible.

Next time someone asks you to define cloud, make sure you understand which side of the cloud they are standing on before you answer.

—-

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.
