
BMC Cloud Lifecycle Management

As I’ve written about previously, there are many tools in the market for building clouds – whether private or public. Too many, in fact, and it’s hard to see most of them still being around five years from now. BMC is in a very strong position with both enterprises and large managed service providers – and I would expect them to be one of the survivors based on scale and reach, if for no other reason.

BMC has been hard at work on their Cloud Lifecycle Management (CLM) offering and recently announced their 1.5 release. CLM is a solution built on top of a good chunk of the BMC tools suite – including BladeLogic, Atrium CMDB, Atrium Orchestrator, and more. It’s an approach similar to most of the large IT automation vendors – IBM, CA and HP included – and we have used this model at Unisys as well.

I got a chance to catch up with one of BMC’s cloud product marketers, Lilac Shoenbeck. According to Lilac, CLM 1.5 is a substantial upgrade, with the primary focus on making it easier for cloud administrators to configure and manage their cloud – including the service catalog and rules. They are also focused on providing out-of-the-box functionality for key cloud use-cases: dev/test, big data analytics, Web hosting, etc.

One of the core principles is service catalog management. Rather than specifying a fixed and inflexible catalog, BMC likened this to an ice cream parlor where you have a set of base (cone), middle (ice cream) and top (toppings) components and allow the user a fair bit of latitude in composing their environment.
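
To make the ice cream parlor analogy concrete, here is a minimal sketch of a composable catalog in Python. This is not BMC’s data model; the classes, options and prices are invented purely for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class CatalogOption:
        name: str
        layer: str            # "base", "middle" or "topping"
        monthly_cost: float

    @dataclass
    class ServiceRequest:
        base: CatalogOption                            # the "cone" (e.g. a VM size)
        middle: CatalogOption                          # the "ice cream" (e.g. an OS image)
        toppings: list = field(default_factory=list)   # backup, monitoring, etc.

        def total_cost(self) -> float:
            return (self.base.monthly_cost + self.middle.monthly_cost
                    + sum(t.monthly_cost for t in self.toppings))

    # A user composes a request from catalog pieces rather than picking a fixed bundle.
    small_vm = CatalogOption("2-core VM", "base", 40.0)
    rhel     = CatalogOption("RHEL image", "middle", 15.0)
    backup   = CatalogOption("Nightly backup", "topping", 5.0)
    print(ServiceRequest(small_vm, rhel, [backup]).total_cost())   # 60.0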

CLM is not a pure-play cloud environment like Eucalyptus or Cloud.com, which means it comes with a set of tradeoffs to support the legacy BMC tool set.  This gives them a lot of powerful technology underneath, but anytime you need to integrate a bunch of stuff under the covers – especially stuff that came through acquisition – it makes for a fair bit of complexity.  The pure-play guys have more maneuvering room for innovation – which is how they like it.  However, these tools don’t run in a vacuum, and there can be substantial work to integrate them into existing automation environments.

CLM does provide a high level of automation for both physical and virtual environments and has the advantage of a large enterprise sales force to bring it to market.  Tight integration with Cisco switches and UCS, deep integration with a leading CMDB, and an extensive hardware support matrix are all positives for CLM.  I believe they also ship today with support for multiple hypervisors (VMware, Xen, Hyper-V, KVM).

Another area they tout is business service management and workload placement.  Once a workload is placed into the cloud, rules can be used to move it, scale it, etc. based on business transaction performance and other factors.  There’s a fair bit of work to get this right, so time will tell if they have gotten it working well.
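
BMC hasn’t published how its rule engine works, but the general shape of policy-driven placement can be sketched roughly as follows. All metric names and thresholds here are hypothetical.

    # Minimal sketch: each rule is a (condition, action) pair evaluated in order
    # against a per-workload metrics dict. Thresholds and names are invented.
    RULES = [
        (lambda m: m["response_time_ms"] > 500, "migrate to a faster tier"),
        (lambda m: m["cpu_util"] > 0.85,        "scale out one instance"),
        (lambda m: m["cpu_util"] < 0.20,        "scale in one instance"),
    ]

    def decide(metrics: dict) -> str:
        for condition, action in RULES:
            if condition(metrics):
                return action     # first matching rule wins
        return "no action"

    print(decide({"response_time_ms": 420, "cpu_util": 0.91}))   # scale out one instance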

It is missing some key elements right now – most notably a public cloud API out of the box (you can write your own against the internal APIs of CLM). It’s also not open source – which I have also written about – and is going to be fairly complex to set up initially. The user portal is also fairly basic and targeted at enterprise users, not the Web/SMB market, though you can use their internal APIs to create your own portal if desired.
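
As a purely illustrative sketch of what “write your own” might involve, here is a thin public REST facade that forwards requests to an internal provisioning API. None of the URLs, paths or fields below are CLM’s real interfaces; they are stand-ins.

    # Hypothetical facade only: the URL, paths and JSON fields below are invented
    # for illustration and are not CLM's actual internal interfaces.
    import requests
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    INTERNAL_API = "https://clm.internal.example.com/api"   # placeholder address

    @app.route("/v1/servers", methods=["POST"])
    def create_server():
        spec = request.get_json()
        # Translate the public request into whatever the internal provisioning call expects.
        resp = requests.post(INTERNAL_API + "/provision",
                             json={"template": spec["image"], "size": spec["flavor"]},
                             timeout=30)
        return jsonify({"request_id": resp.json().get("id")}), 202

    if __name__ == "__main__":
        app.run(port=8080)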

Pricing was not disclosed, but they do have both usage-based and perpetual license-based models. The usage-based pricing is particularly key to the service provider space, though apparently some enterprises are also using this model. You’d expect BMC to be priced quite a bit higher than the pure-play market, though I am led to believe that they can be very aggressive to win deals.

BMC CLM is a credible and reasonably well positioned offering from a traditional ITA tools vendor.  If you are already a big BladeLogic user, or you want a cloud solution from a mainstream data center automation tools vendor, CLM is a strong offering.  If you tend towards open source tools in your data center and are focused on leveraging new innovations, this might not be the best fit.

A Vision of the Future Cloud Data Center

A new year is often a time for reflection on the past and pondering the future.  2010 was certainly a momentous year for cloud computing.  An explosion of tools for creating clouds, a global investment rush by service providers, a Federal “cloud first” policy, and more.  But in the words of that famous Bachman Turner Overdrive song — “You ain’t seen nothin’ yet!”

In fact, I’d suggest that in terms of technological evolution, we’re really just in the Bronze Age of cloud.  I have no doubt that at some point in the not too distant future, today’s cloud services will look as quaint as an historical village with no electricity or running water.  The Wired article on AI this month is part of the inspiration for what comes next.  After all, if a computer can drive a car with no human intervention, why can’t it run a data center?

Consider this vision of a future cloud data center.

The third of four planned 5 million square foot data centers quietly hums to life.  In the control center, banks of monitors show data on everything from the number of running cores to network traffic to hot spots of power consumption.  Over 100,000 ambient temperature and humidity sensors keep track of the environmental conditions, while three cooling towers vent excess heat generated by the massively dense computing and storage farm.

The hardware, made to exacting specifications and supplied by multiple vendors, uses liquid coolant instead of fans – making this one of the quietest and most energy-efficient data centers on the planet.  The 500U racks reach 75 feet up into the cavernous space, though the ceiling is yet another 50 feet higher where the massive turbines draw cold air up through the floors.  Temperature is relatively steady as you go up the racks due to innovative ductwork that vents cold air every 5 feet as you climb.

Advanced robots wirelessly monitor the 10GBps data stream put off by all of the sensors, using their accumulated “knowledge and experience” to swap out servers and storage arrays before they fail. Specially designed connector systems enable individual pieces or even blocks of hardware to be snapped in and out like so many Lego blocks – no cabling required.  All data moves on a fiber backbone at multiple terabytes per second.

On the data center floor, there are no humans.  The PDUs, cooling systems and even the robots themselves are maintained by robots – or shipped out of the data center into an advanced repair facility when needed.  In fact, the control center is empty too – the computers are running the data center.  The only people here are in the shipping bay, in-boarding the new equipment and shipping out the old and broken, and then only when needed.  Most of these work for the shippers themselves.  The data center has no full-time employees.  Even security and access control for the very few people allowed on the floor for emergencies is managed by computers attached to iris and handprint scanners.

The positioning and placement of storage and compute resources makes no sense to the human eye.  In fact, it is sometimes rearranged by the robots based on changing demands placed on the data center – or changes that are predicted based on past computing needs.  Often this is based on private computing needs of the large corporate and government clients who want (and will pay for) increased isolation and security.  The bottom line – this is optimized far beyond what a logical human would achieve.

Tens of millions of cores, hundreds of exabytes of data, no admins.  Sweet.

The software automation is no less impressive.  Computing workloads and data are constantly optimized by the AI-based predictive modeling and management systems.  Data and computing tasks are both considered to be portable – one moving to the other when needed.  Where large data is required, the compute tasks are moved to be closer to the data.  When only a small amount of data is needed, it will often make the trip to the compute server.  Of course, latency requirements also play a part.  A lot of the data in the cloud is maintained in memory — automatically based on demand patterns.
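
One crude way to frame the “move the code or move the data” decision is as a transfer-time comparison. The link speed and sizes below are made up for illustration, and a real scheduler would also weigh latency, cost and data gravity.

    # Illustrative only: pick whether to ship the data to the compute node or ship
    # the compute task (say, a container image) to where the data lives.
    LINK_GBPS = 10.0                          # assumed network speed between sites

    def transfer_seconds(size_gb: float) -> float:
        return (size_gb * 8) / LINK_GBPS      # gigabytes -> gigabits, divided by link speed

    def placement(data_gb: float, task_image_gb: float) -> str:
        if transfer_seconds(data_gb) > transfer_seconds(task_image_gb):
            return "move the task to the data"
        return "move the data to the task"

    print(placement(data_gb=5000, task_image_gb=2))   # move the task to the data
    print(placement(data_gb=0.5,  task_image_gb=2))   # move the data to the task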

The security AI is in a constant and all-out running battle with the bots, worms and viruses targeting the data center.  All server images are built with agents and monitoring tools to track anomalies and attack patterns that are constantly updated.  Customers can subscribe to various security services and the image management system automatically checks for compliance. Most servers are randomly re-imaged throughout the day based on the assumption that the malware will eventually find a way to get in.

Everything is virtualized – servers, storage, networking, data, databases, application platforms, middleware and more.  And it’s all as a service, with unlimited scale-out (and scale-in) of all components.  Developers write code, but don’t install or manage most application infrastructure and middleware components.  It’s all there and it all just works.

Component-level failure is assumed and has no impact on running applications.  Over time, as the AI learns, reliability of the software infrastructure underlying any application exceeds 99.999999%.
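
For a sense of scale, 99.999999% availability allows only about a third of a second of downtime per year:

    # Back-of-the-envelope: allowed downtime per year at "eight nines".
    SECONDS_PER_YEAR = 365.25 * 24 * 3600            # about 31.6 million seconds
    availability = 0.99999999
    print(SECONDS_PER_YEAR * (1 - availability))     # roughly 0.32 seconds per year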

Everything is controllable through APIs, of course.  And those APIs are all standards-based so tools and applications are portable among clouds and between internal data centers and external clouds.

All application code and data is geographically dispersed so even the failure of this mega data center has a minimal impact on applications.  Perhaps a short hiccup is experienced, but it lasts only seconds before the applications and data pick up and keep on running.

Speaking of applications, this cloud data center hosts thousands of SaaS solutions for everything from ERP, CRM, e-commerce, analytics, business productivity and more. Horizontal and vertical applications too.  All exposed through Web services APIs so new applications – mashups – can be created that combine them and the data in interesting new use cases.  The barriers between IaaS, PaaS and SaaS are blurred and operationally barely exist at all.

All of this is delivered at a fraction of the cost of today’s IT model.

Large data center providers using today’s automation methods and processes are uncompetitive. Many are on the verge of going out of business and others are merging in order to survive.  A few are going into higher-level offerings – creating custom solutions and services.

The average enterprise data center budget is 1/10th of what it used to be. Only the applications that are too expensive to move or otherwise lack suitability for cloud deployment are still in-house managed by an ever-dwindling pool of IT operations specialists (everybody else has been retrained in cloud governance and management, or found other careers to pursue).  Everything else is either a SaaS app or otherwise cloud-hosted.

Special-purpose clouds within clouds are easily created on the fly, and just as easily destroyed when no longer needed.

The future of the cloud data center is AI-managed, highly optimized, and incredibly powerful at a scale never before imagined.  The demand for computing power and storage continues to grow at ever increasing rates.  Pretty soon, the data center described above will be considered commonplace, with scores or even hundreds of them sprinkled around the globe.

This is the future – will you be ready?

Follow me on twitter for more cloud conversations: http://twitter.com/cloudbzz

Notice: This article was originally posted at http://CloudBzz.com by John Treadway.

(c) CloudBzz / John Treadway
