Data centre density
9 March 2015
“That’s probably the downside of virtualisation. If you look at the amount of density you can get with virtual machines, your ability to provision these services is so simple that it’s easy to just ramp them up and that in turn is creating demand from customers and IT departments.”
Cody believes that a further management complication arising from data centre density is the issue of sprawl. When provisioning virtual machines is relatively simple, it's easy for users to forget that there's a real-world cost associated with running them.
“In our experience, companies are becoming more concerned with preventing sprawl by making sure they have tight controls in place over what gets provisioned, who signs off on services and how long they are meant to be provisioned for. It’s so easy to provision a virtual machine now that, unless you put controls in place, you can end up with a huge amount of your compute power or storage being consumed by applications or services that aren’t core to the company’s requirements,” he said.
On top of this, it is common for customers to spin up development or test boxes, and then forget they have done so.
“We also have incidents of customers doing this, not monitoring their systems properly and only later finding out they’re still consuming resources in the data centre. For this reason, we’ve also seen organisations start to apply notional charges against storage and VMs to try to create more responsibility around it.”
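A notional charge of this kind is usually a simple showback calculation: each VM's resource allocation is priced against internal rates and attributed to its owning team. The sketch below illustrates the idea; the rates and resource fields are illustrative assumptions, not real price lists or any vendor's API.

```python
# Hypothetical internal showback rates, per month. These figures are
# made up for illustration only.
RATES = {"vcpu": 15.0, "ram_gb": 4.0, "storage_gb": 0.10}

def monthly_charge(vm: dict) -> float:
    """Notional monthly cost attributed to the team that owns this VM,
    based on allocated vCPUs, RAM and storage."""
    return (vm["vcpu"] * RATES["vcpu"]
            + vm["ram_gb"] * RATES["ram_gb"]
            + vm["storage_gb"] * RATES["storage_gb"])
```

For example, a 2-vCPU VM with 8GB of RAM and 100GB of storage would attract a notional charge of 72.00 a month under these assumed rates; even a nominal figure like this makes the cost of a forgotten VM visible on someone's budget.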
A good way to protect against this sprawl is to put regular reviews and audits in place, along with automated date stamping when test environments are provisioned, so that they disappear after a set period unless a valid business case is presented for keeping them.
Rory Choudhuri, senior product marketing manager for infrastructure with VMware, suggested that a second wave of virtualisation is taking place, and that this is being driven by hardware innovations.
“In the early days of virtualisation, companies would go to three or four VMs per host and now we’re seeing a repeat of that consolidation but this time, it’s 25 to 30 VMs per host. This is possible because server vendors are making their hardware ever more powerful,” he said.
VMware said that it has been aware of the trend towards data centre density for some time, and in fact has been laying the groundwork necessary to make it more manageable for its users since before it was an issue.
“Five or six years ago we saw this coming and invested in operations management tools, including buying half a dozen different companies, effectively to create what we now call vRealize Operations. The point of this is that we know that once a user gets above 50 or 60 VMs it becomes very hard to oversee them, and you start to need tools to make that easier,” said Choudhuri.
Increased density brings increased complexity, and that is a problem for anyone running such an environment without the tools to manage it.
“From vCenter upwards, through the management stack, we think it’s important that this is integrated right in. This allows customers to answer questions like ‘are we managing our resources correctly?’ We know from research that people tend to overprovision left, right and centre, and that’s why we sell vSphere with operations management built in, purely because we recognise this as a potential issue,” said Choudhuri.