Cloud is a great marketing concept. It creates an impression of something new and better. But is it really new and better, or is it for the birds, up there in Cloud Cuckoo Land?
- Running your IT in a third party data center is not new
- Paying someone else to run it is not new
- Financing your IT assets over a payment term is not new
- Buying specific services, like payroll, on a subscription is not new
What is better about it?
- Having your IT remote rather than local is no faster or cheaper than it was
- Paying someone else to run it is not better or cheaper than it was
- The cost of financing assets is not lower
- Running specific services, like e-mail, on a subscription has not become better or cheaper than it used to be
Of course it has always made sense to run some of your IT remotely, like a public website. Nothing has changed in the arguments for and against, so we don't need to repeat them.
Nothing has changed, either, in the basic economics. Remote data is still more expensive, by roughly the same factor as before. Having someone else do something for you is still generally more expensive than doing it yourself, or not doing it at all. Yes, there are economies of scale in data centers, but there always have been.
So what's up?
The significant change has been Virtualisation, or time-slicing of computing resources. Virtualisation is done by a tiny piece of software (maybe 300MB) that separates different workloads and their use of resources like CPU and memory. It is a kind of extended BIOS, nothing more. If the operating system allowed complete separation of workloads you would not need it.
Time-slicing enables the units of computing resource to be rented out. Up to now, the unit of resource has been physical. No matter how you finance a server, someone has to buy it and allocate it to a workload. With time-slicing the resource can be taken from a pool and put back when not used. You pay for the amount of resource used, and the level of guaranteed availability of the resource.
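The economics of pay-per-use can be sketched with a toy calculation. All the numbers below (the hourly rate and the availability premium) are hypothetical, chosen only to illustrate how paying for hours used differs from paying for a dedicated box running round the clock:

```python
# A back-of-the-envelope sketch of pay-per-use billing.
# The hourly rate and availability premium are made-up numbers.

def rental_cost(hours_used, hourly_rate=0.10, availability_premium=0.20):
    """Cost of time-sliced compute: pay only for the hours actually
    consumed, plus a surcharge for a guaranteed-availability tier."""
    return hours_used * hourly_rate * (1 + availability_premium)

# A workload that runs 8 hours a day for 30 days, versus paying
# for the equivalent resource 24x7 as a dedicated server would be:
part_time = rental_cost(8 * 30)    # 240 hours billed
full_time = rental_cost(24 * 30)   # 720 hours billed
print(part_time, full_time)        # roughly 28.8 vs 86.4
```

The point is not the specific figures but the shape of the charge: with a physical server you always pay the `full_time` figure, whether the workload is busy or idle.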
Of course you still have to have software to make use of the computing resource. It would be difficult if you had to buy the software as an asset, even if you were renting the CPU cycles. The Microsoft Service Provider Licensing Agreement (SPLA) allows a service provider to charge for usage rather than sell the license.
If we take something like Exchange, there are now three models for obtaining the service:
- Buy the hardware and software
- Subscribe to seats in a shared Exchange system
- Rent the hardware and software
It does not really matter where you run it (on premises or at a third party data center) or who runs it (yourself or subcontracted). There is nothing new about those options. Of the three models above, only the last is new.
So why would you want to rent the hardware and software to run your own system, rather than buy it outright or subscribe to a service instead? You would need to want a custom dedicated service (otherwise subscribe) as well as the flexibility to scale resources up and down inside a three-year period (otherwise buy).
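That rent-versus-buy trade-off can be made concrete with a rough comparison. Everything here is hypothetical: the server price, the monthly rental rate, and the demand profile are invented purely to show when flexibility pays for itself:

```python
# A hedged rent-vs-buy comparison over a three-year horizon.
# Buying means sizing for peak demand; renting means paying
# month by month for what is actually needed.
# All prices and the demand profile are made-up illustration values.

BUY_PRICE_PER_SERVER = 3000    # capital cost, three-year useful life
RENT_PER_SERVER_MONTH = 150    # rental rate, premium included

# Servers needed each month: a quiet baseline with a seasonal spike,
# repeated for three years.
demand = ([2] * 10 + [10, 10]) * 3

buy_cost = max(demand) * BUY_PRICE_PER_SERVER            # sized for peak
rent_cost = sum(m * RENT_PER_SERVER_MONTH for m in demand)

print(buy_cost, rent_cost)   # 30000 vs 18000 with these numbers
```

With a spiky profile like this one, renting wins because the bought servers sit idle most of the year; with flat demand at peak level every month, the same arithmetic favours buying. That is exactly the "flexibility to scale up and down" test described above.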
What rental achieves, which is quite valuable, is to remove the capital expenditure aspect of the decision making. With the physical server model it was easy. You had to buy a number of boxes and do some sizing. With the Virtualisation model the decisions are more complex. You need to create a virtualisation infrastructure as well as the traditional server infrastructure.
Services like Dropbox, iCloud, Google Apps, Office 365 are not technological innovations. Services like Windows Azure and Amazon S3 are technological innovations, because they enable you to rent computing resources rather than buy, using Virtualisation.
So is it Cloud, or Cloud Cuckoo Land? I think the question you need to ask as a CIO is "can we rent it?". That is what's new about Cloud.