Virtualisation technology has undoubtedly been one of the biggest IT game changers of recent times. However, over the last year, one aspect of the virtualisation proposition has come under increasing scrutiny: how the big-name providers charge for their products.
A number of major players in the area have abruptly changed the way they charge for their products, and with cost savings a significant driver behind the uptake of virtualisation, it's not surprising that uncertainty and confusion in this area are making some potential converts nervous.
Part of the problem stems from the various methods used to charge for virtualisation platforms: some providers charge per CPU, some per virtual RAM allocation, and some per physical box. A cynical view of the situation might suggest that as virtualisation becomes more popular, and hardware evolves to allow more virtual servers to run on fewer physical boxes, it no longer makes commercial sense for the vendors of the software facilitating these changes to charge per physical box.
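The three charging methods the article describes can be sketched as simple arithmetic. All prices and hardware figures below are hypothetical, chosen only to show how the models diverge as an estate consolidates:

```python
# Illustrative comparison of three common virtualisation charging
# methods. All prices and hardware figures are hypothetical.

def cost_per_cpu(sockets_per_host: int, hosts: int, price_per_cpu: int) -> int:
    """Charge for every physical CPU (socket) in the estate."""
    return sockets_per_host * hosts * price_per_cpu

def cost_per_vram(total_vram_gb: int, gb_per_licence: int, price_per_licence: int) -> int:
    """Charge for the pool of virtual RAM allocated to running VMs."""
    licences = -(-total_vram_gb // gb_per_licence)  # ceiling division
    return licences * price_per_licence

def cost_per_box(hosts: int, price_per_box: int) -> int:
    """Charge a flat fee for each physical server, however large."""
    return hosts * price_per_box

# A consolidation scenario: ten hosts shrink to four bigger ones.
# Per-box revenue falls with the host count; per-vRAM revenue
# tracks the workload, which stays the same.
print(cost_per_box(hosts=10, price_per_box=3000))  # → 30000
print(cost_per_box(hosts=4, price_per_box=3000))   # → 12000
print(cost_per_vram(total_vram_gb=96, gb_per_licence=32, price_per_licence=500))
```

The per-box line is what the "cynical view" turns on: consolidation directly shrinks the vendor's licence count unless the model changes.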
Licensing model
"That’s not entirely accurate. It’s correct that we’ve changed our licensing model with the roll-out of vSphere 5, but what we’ve done is simplify and streamline the way we do it," said Fredrik Sjostedt, director for EMEA product marketing with VMware.
"Right now, if you compare on a like-for-like basis the way we charge with the way we used to charge, it’s equivalent from a pricing perspective. Ninety-five per cent of customers will see no difference before and after. There are actually some customers who will experience a positive impact, they’ll pay less, and there are some that will experience a negative impact, those companies with extremely high consolidation ratios, with maxed-out servers. But those are a tiny number of customers, and we work with them on a case-by-case basis."
Earlier this year, VMware was forced into an awkward climb-down on its new pricing strategy, when concerned customers objected to newly introduced virtual memory limits which would have required users to purchase more licences than before. In the face of this criticism, the company increased the amount of virtual memory usable per licence to the point where most customers were no longer affected.
"In the past we had a per-CPU licensing model, but we also had a physical limitation with that, depending on the version, of either six or twelve cores per processor. There was also a physical memory limit: you couldn’t install more than a certain amount of memory under that licence. There were physical constraints," said Sjostedt.
"But if you look at developments on the server side of things, that looked like it was going to accelerate licence costs for our customers much, much quicker. In 2012, companies like Intel and AMD are showcasing 12-core processors, 24-core, 48-core, 96-core and so on. Faced with that, our licensing model simply wouldn’t scale. Just by upgrading to an updated server, our customers’ costs would soar."
"That’s why we removed all those physical limits, maintained the per-socket basis but moved towards a vRAM-based model that’s really consumption-based. You could argue that today versus yesterday it’s swings and roundabouts: the cost more or less stays the same. But if you extrapolate the old way out into the future, things would have become much more expensive much faster, and that clearly wasn’t customer friendly."
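Sjostedt's scaling argument can be made concrete with a rough sketch. The figures are hypothetical, and the assumption that an over-cap socket would need extra licences is one reading of the old core-capped model, not VMware's price list:

```python
# Sketch of why per-CPU licences with a core cap stop scaling as core
# counts grow. Figures and the extra-licence assumption are hypothetical.
import math

def old_model_licences(cores_per_socket: int, sockets: int,
                       cores_per_licence: int = 6) -> int:
    """Old style: one licence per socket, but a socket with more cores
    than the cap is assumed here to need extra licences to cover it."""
    per_socket = math.ceil(cores_per_socket / cores_per_licence)
    return per_socket * sockets

def new_model_licences(sockets: int) -> int:
    """New style: one licence per socket, with a pooled vRAM
    entitlement instead of physical core/memory caps."""
    return sockets

# Upgrading a two-socket host from 6-core to 48-core CPUs:
print(old_model_licences(6, 2), old_model_licences(48, 2))  # → 2 16
print(new_model_licences(2))                                # → 2
```

Under the capped model a hardware refresh multiplies the licence count eightfold; under the per-socket model it stays flat, which is the "wouldn't scale" point in the quote.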
Different models
One source of confusion for companies looking at the virtualisation model is the fact that most of the major players price their platforms in completely different ways. Microsoft, for example, doesn’t use a Virtual RAM measure, but rather prices based on the number of physical processors in each physical server.
"You buy a Windows Server Datacenter licence for each processor in each box. Once you’ve done that, you can virtualise as much as the box will support: an unlimited number of virtualised instances of Windows Server, once you license the physical box they’re running on," said Ronan Geraghty, server business group lead for Microsoft Ireland.
"There’s still a client access licence (CAL) model in play, so we still talk about servers and CALs. But in terms of the number of virtual instances running things like SQL Server or Exchange or whatever, you can run an unlimited number of virtual machines on a physical box once you’ve licensed the individual processor. If you have 1,000 users, pre and post virtualisation, your CAL situation should be the same."
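The model Geraghty describes decouples the virtualisation cost from the VM count entirely: server licences track physical processors, CALs track users. A minimal sketch, with hypothetical prices that are not Microsoft's:

```python
# Sketch of a per-processor model with unlimited virtualisation rights.
# Prices are illustrative, not Microsoft's.

def server_licence_cost(processors: int, price_per_processor: int) -> int:
    """One Datacenter-style licence per physical processor; the VM
    count on that host is then unlimited."""
    return processors * price_per_processor

def cal_cost(users: int, price_per_cal: int) -> int:
    """Client access licences scale with users, not with VMs."""
    return users * price_per_cal

# A two-processor host with 1,000 users, before consolidation (5 VMs)
# and after (50 VMs): the total is identical because neither term
# depends on the number of virtual machines.
for vm_count in (5, 50):
    total = server_licence_cost(2, 4000) + cal_cost(1000, 30)
    print(vm_count, total)  # → 5 38000, then 50 38000
```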
Geraghty said that in Microsoft’s case, the changes that have taken place concern how the server components are licensed.
"We’ve gone for a simplified approach, with unlimited virtualisation rights. It’s not tied to the number of virtual machines you’re using, unlike some other providers that charge per virtual machine," he said.
"If you look at how things are going, then the future trends appear to point to a situation where virtual machine usage rates are going to increase dramatically. If you look at the number of cores per CPU and the number of sockets per server, there are significant implications for future growth. These have increased dramatically over the last few years, and that’s not going to stop. That will drive down infrastructure costs."
100% virtual
According to the resellers bringing these pricing models to the market, the most important consideration from the customer’s point of view remains how to get a clear overview of licensing costs if they decide to virtualise 100 per cent of their infrastructure. Once that step is taken it is very difficult to pull back, and companies that virtualise fully are effectively committed to the technology.
"That’s certainly the question we’re being asked. Software licences play a big role in this equation," said Ben McGahon, director of Comsys. "The various vendor offerings are priced differently, but figuring out which one is more or less expensive depends very much on where you’re starting from. If someone is looking at virtualisation and for example they’re an Oracle house and are thinking of going to VMware, then there are some difficult questions regarding support. It’s supported to a point, but only to a point."
"Virtualisation brings lots of benefits in terms of consolidation, but if you’re running an Oracle environment and you want to use VMware, and you have a problem, then in many cases, you have to recreate your set-up on a physical server to get support for it. But if you’re looking at running Oracle systems with Oracle VM, that changes things a lot, and allows customers quite a lot of flexibility."
Homogeneous stack
According to McGahon, Oracle recognises its own virtualisation as a hard partition within the server. "That allows people to buy bigger systems, but only hard partition and provision certain CPUs that are affected by license costs. They can then license as they grow by opening up capacity as it’s required," he said.
By contrast, companies using Oracle’s licensing model on a HP or IBM server are obliged to license all the CPUs that are present in the server, whether they’re being used or not.
"This defeats a lot of the point of virtualisation. Oracle’s take on this is that their virtualisation runs best on their technology and they can offer better service levels if you stick to that. Of course, there’s a counter argument-there’s always a counter argument in these areas," McGahon said.
He believes that it is probably safe enough for companies to take a risk on future licensing costs in the area of virtualisation, and that as long as the changeover is specified correctly, costs should remain predictable.
"VMware is getting quite a bad rep right now because of its licensing, and there are lots of people having a go at them for what’s perceived to be excessive licensing. And that’s fair enough-their licensing is more expensive in vSphere 5 than in vSphere 4. But what’s not so obvious is that the other providers have hidden costs that aren’t obvious in direct comparison," said McGahon.
"Microsoft says it will give you a free hypervisor if you buy Windows Server 2008. You get one physical and four virtual licences, so you can have five instances of Windows running and get your virtualisation for free. That sounds fantastic, but in our experience their consolidation can be abysmal at times due to the way their virtualisation stack is written."
"There’s also a management issue. In order to properly manage a Microsoft Hyper-V environment, you need probably three or four of their System Center products. Also, when you compare it to VMware, your physical consolidation isn’t as high. When you take into account power and cooling, the more physical infrastructure you have, the higher those costs are. So the consolidation ratios offered by the various virtualisation vendors’ packages are important. Sometimes free isn’t free, even if the package might look cheaper."
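McGahon's "free isn't free" point is a total-cost-of-ownership calculation. A back-of-the-envelope sketch, with every figure (consolidation ratios, host, power and tooling costs) hypothetical:

```python
# A free hypervisor with a lower consolidation ratio can cost more
# overall once hosts, power/cooling and management tooling are counted.
# All figures are hypothetical.
import math

def total_cost(vms: int, vms_per_host: int, host_cost: int,
               power_per_host: int, hypervisor_licence_per_host: int,
               management_cost: int) -> int:
    """Hosts needed is driven by the consolidation ratio; every extra
    host brings its own hardware, power and licence cost."""
    hosts = math.ceil(vms / vms_per_host)
    return (hosts * (host_cost + power_per_host + hypervisor_licence_per_host)
            + management_cost)

# 100 VMs: a paid hypervisor at 20 VMs/host versus a free one at
# 12 VMs/host with heavier management tooling.
paid = total_cost(100, 20, 8000, 1500, 3500, 5000)
free = total_cost(100, 12, 8000, 1500, 0, 15000)
print(paid, free)  # → 70000 100500
```

With these (invented) numbers the free hypervisor is the dearer option: the four extra hosts and the tooling outweigh the licence saving.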
Growth perspective
For those companies considering virtualising their IT infrastructure, a key question is how the technology will scale to accommodate future growth, and significantly, how will licensing costs increase as the company’s dependence on the technology grows. Unfortunately, it’s quite hard to answer this kind of question in a simple way, according to VMware’s Fredrik Sjostedt.
"It’s extremely difficult to say how costs scale. It depends on your environment, how much physical infrastructure you have today, how much you have virtualised and how much you are planning to virtualise. But if we go back and look at virtualisation, it remains the case that a lot of the big savings are on the capital expenditure side," he said.
"You might be an organisation with 20 or 40 servers and so on-the big issue from a licensing cost point of view is that when you go to refresh your servers, you might consolidate down to five or ten much bigger servers. There will be fewer of them. The licensing cost isn’t going to be insignificant, but it will be less relevant."
"But the key benefit you will get out of virtualised infrastructure is the scalability and the opportunity to shave operational expense out of your infrastructure. Take something as straightforward as business criticality or disaster recovery. Right now, if you have a physical, rather than virtual, environment and you want to be able to continue working through a disaster, you need to replicate the same physical set-up waiting in the wings, just in case. In the virtualised environment, you don’t need to do that."
Cloud influence
So where is this style of licensing likely to take the industry? It’s hard to say for sure, but it’s not hard to take a guess, particularly when the potential of cloud technology becomes more fully realised.
"Software licensing is a big deal when it comes to selling virtualisation, in particular when it comes to long-term planning. As you can fit more virtual servers onto fewer physical boxes, you are getting more for less, and potentially the software manufacturers can lose some profitability and revenue as a result," said Patrick O’Neill, senior corporate account manager for MJ Flood.
"However as we move towards the cloud and the idea of IT as a utility, that will probably be based on virtual licensing with customers licensing per server in the cloud. If that does happen, and we all end up being charged per server, then these changes we’re seeing now start to look like some of the providers are laying the groundwork to move the industry in that direction. I would imagine, and it’s an assumption on my behalf, that other vendors might adopt similar models."
O’Neill said that traditionally, customers think of software licences as being tied to hardware, so when they bought a server or a desktop, Windows 7 or Vista would come installed as OEM software and would be tied to that piece of hardware.
"Because of the changes that are presented by virtualisation, operating system and applications are no longer hardware-dependent-they can be moved around and changed. To do that, you need to have a licensing model rather than the older OEM model. In the old days, one server equalled one license, but today you could have 20 or 30 virtual servers running off one box, so obviously something has to give."



