Data centre density

9 March 2015

Doing more with less is an idea that has never been hard to sell to the corporate world. So when it comes to the benefits of virtualisation and software-defined utility in the data centre, the fact that more can be squashed into less rack space seems like a win-win situation.

Data centre density seems to offer the holy grail of efficiency — more utility from less space, with lower running costs and easier management. But what are the real-world effects of this increased density?

Can the IT department really have its cake and eat it too, cutting costs, increasing efficiency and future-proofing against new developments all at the same time? And what about manageability? Are organisations struggling to get the promised performance and efficiencies from ultra-dense environments?

Tom Long, head of technical strategy for Cisco, said that the answer to this question depends on where the asker is standing.

Point of view
“We make a distinction between the enterprise data centre, where a company is running its own data centre, and managed service data centres where a company is renting space in someone else’s. The economics change slightly between them,” he said.

“The managed service data centre is usually paid for on a per-rack or half-rack basis and it’s very clear what the cost savings of higher density are in that kind of situation. A lot of the time in the private enterprise data centre, space isn’t always the main challenge when you look at the fundamental elements of space, power and cooling, whereas in a public data centre, where you’re paying for space, that’s different.”

According to Long, it is becoming increasingly common to find ultra-dense environments in public data centres, largely driven by technical innovation.

“A lot of today’s equipment packs a surprisingly powerful punch for its size — the so called Barry McGuigan effect. That’s come about because of innovation in technology around compute and networking producing phenomenal performance in a small area. And from a management point of view, that’s not that big a deal I don’t think. But it does create challenges around power.”

Long offers the example of a US-based Cisco data centre where the company replaced a single rack of older, low-density compute with a new high-density compute system and found that power consumption for that rack quadrupled.

More power
“That’s not because our high compute environment is not power-efficient. It’s actually extremely efficient, and has been independently assessed as such, but because the density it provides sucks more power.”

“Likewise, when you have a high concentration of power it creates cooling challenges and that’s just a fact. Keeping high density systems cool is a real challenge but this is now a focus area for a lot of innovation.”

According to Long, it’s not uncommon for Cisco customers to tell the company that they cannot fill racks with equipment because they can’t power or cool them.

“It’s a huge area of interest for us. In industry, performance metrics can change over time. Right now, people are very interested in performance-per-watt, or how many gigabits-per-watt is that device? Performance versus power requirement is a hugely important metric,” he said.
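
To put the metric Long mentions into concrete terms, below is a minimal sketch of a performance-per-watt (gigabits-per-watt) comparison in Python. The device names, throughput figures and power draws are invented for illustration and are not taken from Cisco or the interview.

# Minimal performance-per-watt sketch; all figures here are hypothetical.
devices = {
    "older low-density switch":  {"throughput_gbps": 480,  "power_watts": 400},
    "newer high-density switch": {"throughput_gbps": 3600, "power_watts": 1200},
}

for name, spec in devices.items():
    # Gigabits of throughput delivered for every watt the device draws
    gbps_per_watt = spec["throughput_gbps"] / spec["power_watts"]
    print(f"{name}: {gbps_per_watt:.1f} Gbit/s per watt")

On these made-up numbers the denser device delivers two and a half times the throughput per watt while drawing three times the absolute power, which is exactly the trade-off Long describes: better efficiency per watt, but a bigger power and cooling load on the rack.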
