Summary: Telco 2.0’s analysis of operators’ potential role and opportunity in ‘Cloud Services’, a set of new business model opportunities that are still in an early stage of development - although players such as Amazon have already blazed a substantial trail. (December 2010, Executive Briefing Service, Cloud & Enterprise ICT Stream & Foundation 2.0)
See also the videos from IBM on what telcos need to do, and Oracle on the range of Cloud Services, and the Telco 2.0 Analyst Note describing Americas and EMEA Telco 2.0 Executive Brainstorm delegates' views of the Cloud Services Opportunity for telcos.
Apart from being the leading buzzword in the enterprise half of the IT industry for the last few years, what is this thing called “Cloud”? Specifically, how does it differ from traditional server co-location, or indeed time-sharing on mainframes as we did in the 1970s? These are all variations on the theme of computing power being supplied from a remote machine shared with other users, rather than from PCs or servers deployed on-site.
Two useful definitions were voiced at the 11th Telco 2.0 EMEA Executive Brainstorm in November 2010.
The definition of Cloud has been muddied by hype and the resultant tendency to apply the word to almost anything that is network-resident. For a start, it's unhelpful to describe anything that merely includes a Web site as “cloud computing”. A better way to understand ‘Cloud Services’ is to look at the classic products in the market.
The most successful of these, Amazon's S3 and EC2, provide low-level access to computing resources – disk storage, in S3, and general-purpose CPU in EC2. This differs from an ASP (Application Service Provider) or Web 2.0 product in that what is provided isn't any particular application, but rather something close to the services of a general purpose computer. It differs from traditional hosting in that what is provided is not access to one particular physical machine, but to a virtual machine environment running on many physical servers in a data-centre infrastructure, which is probably itself distributed over multiple locations. The cloud operator handles the administration of the actual servers, the data centres and internal networks, and the virtualisation software used to provide the virtual machines.
Varying degrees of user control over the system are available. A major marketing point, however, is that the user doesn't need to worry about system administration – it can be abstracted out as in the cloud graphic that is used to symbolise the Internet on architecture diagrams. This tension between computing provided “like electricity” and the desire for more fine-grained control is an important theme. Nobody wants to specify how their electricity is routed through the grid, although increasing numbers of customers want to buy renewable power – but it is much more common for businesses (starting at surprisingly small scale) to have their own Internet routing policies.
So, for example, although Amazon's cloud services are delivered from their global data centre infrastructure, it's possible to specify where EC2 instances run to a continental scale. This provides for compliance with data protection law as well as for performance optimisation. Several major providers, notably Rackspace, BT Global Services, and IBM, offer “private cloud” services which represent a halfway house between hosting/managed service and fully virtualised cloud computing. And some explicit cloud products, such as Google's App Engine, provide an application environment with only limited low-level access, as a rapid-prototyping tool for developers.
Back at the November 2009 Telco 2.0 Executive Brainstorm in Orlando, Joe Weinman of AT&T presented an argument that cloud computing is “a mathematical inevitability”. His fundamental point is worth expanding on. For many cloud use cases, the decision between moving into the cloud and using a traditional fleet of hosted servers is essentially a rent-vs-buy calculation. Weinman's point was that once you acquire servers, whether you own them and co-locate or rent them from a hosting provider, you are committed to that quantity of computing capacity whether you use it or not. Scaling up presents some problems, but it is not that difficult to co-locate a few more 1U servers. What is really problematic is scaling down.
Cloud computing services address this by essentially providing volume pricing for general-purpose computing – you pay for what you use. The cloud therefore has an advantage for compute-intensive tasks with highly skewed traffic distributions, for temporary deployments, and for rapid-prototyping projects. However, problems arise when there is a need for capacity on permanent standby, or serious issues of data security, business continuity, service assurance, and the like. These are also typical rent-vs-buy issues.
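Weinman's rent-vs-buy point can be made concrete with a little arithmetic. The sketch below uses purely illustrative figures – they are not real Amazon or hosting prices – but the shape of the result is the point: pay-per-use wins when peak capacity sits idle most of the month, and owning wins when it is busy around the clock.

```python
# Rent-vs-buy sketch. All prices are invented for illustration only.
# An owned (or co-located) server is a fixed monthly cost whether busy
# or idle; an on-demand cloud instance is billed only while it runs.

OWNED_COST_PER_MONTH = 200.0  # assumed all-in cost of one dedicated server
CLOUD_COST_PER_HOUR = 0.50    # assumed on-demand price of a comparable instance
HOURS_PER_MONTH = 730

def monthly_cost(peak_servers, utilisation):
    """Compare owning enough servers for peak load vs renting on demand.

    utilisation: fraction of the month the peak capacity is actually needed.
    Returns (owned_cost, cloud_cost) per month.
    """
    owned = peak_servers * OWNED_COST_PER_MONTH
    cloud = peak_servers * utilisation * HOURS_PER_MONTH * CLOUD_COST_PER_HOUR
    return owned, cloud

# A spiky workload: 10 servers needed at peak, but only 5% of the time.
owned, cloud = monthly_cost(peak_servers=10, utilisation=0.05)
print(f"spiky  -> owned: {owned:.0f}, cloud: {cloud:.0f}")  # cloud is far cheaper

# A flat workload: the same 10 servers busy around the clock.
owned, cloud = monthly_cost(peak_servers=10, utilisation=1.0)
print(f"steady -> owned: {owned:.0f}, cloud: {cloud:.0f}")  # owning is cheaper
```

With these assumed prices the break-even utilisation is 200 / (730 × 0.50) ≈ 55%: above that, committing to your own capacity is cheaper; below it, renting wins – which is exactly the skewed-traffic case described above.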
Another reason to move to the cloud is that providing high-availability computing is expensive and difficult. Cloud computing providers' core business is supporting large numbers of customers' business-critical applications – it might make sense to pass this task to a specialist. Also, their typical architecture, using virtualisation across large numbers of PC-servers to achieve high availability in the manner popularised by Google, doesn't make sense except on a scale big enough to provide a significant margin of redundancy in the hardware and in the data centre infrastructure.
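The scale argument can be illustrated with a toy availability model. The sketch below assumes (unrealistically) independent server failures and an invented per-server failure probability; real failures are correlated, but the qualitative conclusion stands – a redundancy margin only helps once the pool is large enough to carry spare machines.

```python
# Toy availability model: probability that enough of a virtualised pool
# of commodity servers stays up. Failure probability is an invented,
# illustrative figure, and failures are assumed independent.
from math import comb

def availability(n_servers, needed, p_fail=0.02):
    """Probability that at least `needed` of `n_servers` are up,
    given independent per-server failure probability p_fail."""
    p_up = 1.0 - p_fail
    return sum(
        comb(n_servers, k) * p_up**k * p_fail**(n_servers - k)
        for k in range(needed, n_servers + 1)
    )

# Two servers with no spare capacity: both must be up.
print(availability(2, 2))    # ~0.96 -- worse than a single server alone

# Twelve servers where ten suffice: a 20% hardware margin.
print(availability(12, 10))  # ~0.998 -- redundancy pays off at scale
```

This is why the Google-style architecture of many cheap PC-servers only makes sense at a scale where a meaningful fraction of the hardware can be held in reserve.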
The key objections to the cloud are centred around trust – one benefit of spreading computing across many servers in many locations is that this reduces the risk of hardware and/or connectivity failure. However, the problem with moving your infrastructure into a multi-tenant platform is of course that it's another way of saying that you've created a new, enormous single point of commercial and/or software failure. It's also true that the more critical and complex the functions that are moved into cloud infrastructure, and the more demanding the contractual terms that result, the more problematic it becomes to manage the relationship. (Neil Lock, IT Services Director at BT Global Services, contributed an excellent presentation on this theme at the 9th Telco 2.0 Executive Brainstorm.) At some point, the additional costs of managing the outsourcer relationship intersect with the higher costs of owning the infrastructure and internalising the contract. One option involves spending more money on engineers, the other, spending more money on lawyers.
Similar problems exist with regard to information security – a malicious actor who gains access to administrative features of the cloud solution has enormous opportunities to cause trouble, and the scaling features of the cloud mean that it is highly attractive to spammers and denial-of-service attackers. Nothing else offers them quite as much power.
Also, as many cloud systems make a virtue of the fact that the user doesn't need to know much about the physical infrastructure, it may be very difficult to guarantee compliance with privacy and other legislation. Financial and other standards sometimes mandate specific cryptographic, electronic, and physical security measures. It is quite possible that customers of the major clouds would be unable to say in which jurisdiction their users' personal data is stored. Providers may consider this location-independence a feature, but whether it is one depends heavily on the nature of your business.
From a provider perspective, the chief problem with the cloud is commoditisation. At present, major clouds are the cheapest way bar none to buy computing power. However, the very nature of a multi-tenant platform demands significant capital investment to deliver the reliability and availability the customers expect. The temptation will always be there to oversubscribe the available capacity – until the first big outage. A capital-intensive, very high volume, low-price business is the classic profile of a commodity – many operators would argue that this is precisely what they're trying to get away from. Expect vigorous competition, low margins, and significant CAPEX requirements.
To download a full PDF of this article, covering...
...Members of the Telco 2.0™ Executive Briefing Subscription Service and the Cloud & Enterprise ICT Stream can read the Executive Summary and download the full report in PDF format here. Non-Members, please email firstname.lastname@example.org or call +44 (0) 207 247 5003 for further details.