Enterprise storage dip can be combated
(Image: Stockfresh)
Flexibility and agility

“Automating the data centre functions enables IT to react to new business and application requirements in a much more flexible and agile manner”

Asystec: Victor Forde
The software-defined data centre aims to break down these application solution silos by abstracting the functionality of all the hardware components and pooling compute, networking and storage resources, with the objective of automating data centre functions. This enables IT to react to new business and application requirements in a much more flexible and agile manner. What VMware vSphere has done for the compute layer now needs to follow for the network and storage layers.

From a storage perspective specifically, business owners and IT departments alike do not want to see existing investments made redundant. EMC’s ViPR software-only solution addresses this concern while still enabling IT to respond to and meet requirements effectively. ViPR 2.0 can abstract separate heterogeneous storage systems, including third-party and commodity hardware, then pool them and automate provisioning and policy tasks through its control layer. Once this is in place, ViPR data services for block, file, object and HDFS can be requested for the virtual storage arrays, regardless of whether the original system was capable of providing those services. In this way the business gets to utilise its existing infrastructure, while system administrators can manage multi-vendor storage without the added complexity of needing to be a specialist in each system.

At Asystec, we also see customers who are not weighed down by the problem of managing SAN or NAS-based storage increasingly looking at internal direct attached storage (DAS) distributed storage solutions, where the design removes the need for a central network-based storage array. VMware with VSAN and EMC with ScaleIO have developed software-defined storage solutions that use the internal disk devices in existing servers, pooling the distributed storage across those servers and aggregating storage performance and capacity without a centralised array. This has enabled Asystec’s customers to ensure that underutilised local storage in existing servers contributes to the overall shared storage environment.
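To make the abstraction concrete, here is a minimal Python sketch of the control-layer idea described above: heterogeneous arrays are registered into a single virtual pool, and provisioning requests are satisfied by policy rather than by per-array administration. All names here (BackendArray, VirtualPool, provision) are hypothetical illustrations of the concept, not the ViPR API.

```python
# A minimal sketch of the software-defined storage control-layer idea:
# heterogeneous arrays are abstracted into one virtual pool and
# provisioning is driven by policy rather than per-array tooling.
# All class and method names are hypothetical -- not the EMC ViPR API.
from dataclasses import dataclass, field

@dataclass
class BackendArray:
    """One physical array (any vendor) with its native capabilities."""
    name: str
    capacity_gb: int
    services: set          # e.g. {"block"} or {"file", "object"}
    used_gb: int = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb

@dataclass
class VirtualPool:
    """The control layer: pools arrays and automates provisioning."""
    arrays: list = field(default_factory=list)

    def add_array(self, array: BackendArray):
        self.arrays.append(array)      # third-party or commodity alike

    def provision(self, size_gb: int, service: str) -> str:
        # Simple policy: pick the array with the most free space. The
        # control layer can expose services (block/file/object/HDFS)
        # above the pool even when a backend lacks them natively, so
        # only capacity is checked here.
        target = max(self.arrays, key=BackendArray.free_gb, default=None)
        if target is None or target.free_gb() < size_gb:
            raise RuntimeError("no capacity for request")
        target.used_gb += size_gb
        return f"{size_gb} GB of {service} storage provisioned on {target.name}"

pool = VirtualPool()
pool.add_array(BackendArray("vendor-a-san", 10_000, {"block"}))
pool.add_array(BackendArray("commodity-das", 4_000, {"block", "object"}))
print(pool.provision(500, "object"))   # one request, no per-array expertise
```

The same pooling idea underlies the VSAN and ScaleIO approach described above, with server-internal disks playing the role of the backend arrays.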
Addressing fears

“Organisations are always risk averse when it comes to primary storage and unless their fears are assuaged by the storage players they trust, then uptake will be limited to the fringes”

CMS Distribution: Tom Keane
Leveraging existing assets is always something an organisation should look to do, provided its needs are still being met by those assets. That may mean repurposing existing assets while replacing others with solutions that cater for changing requirements. It makes sense to use what you’ve got, where possible.

Software-defined storage (SDS) is getting a lot of airplay, but its market penetration is likely to be very limited unless the major storage vendors make a push in that direction. Organisations are always risk averse when it comes to primary storage and, unless their fears are assuaged by the storage players they trust, uptake will be limited to the fringes. If all you’ve got is a hammer, then everything begins to look like a nail. If you’re a vendor whose portfolio is based around SDS, then of course SDS is the solution to everything.

What is interesting is that, after years of development in storage area networks, where centralised storage is shared out and ‘islands’ of direct attached storage were eliminated, we are now seeing the market move back in that direction. Many vendors are again pushing the ‘islands of storage’ concept to meet specific requirements, VDI for example, rather than a centralised storage solution capable of supporting all of an enterprise’s needs. To my mind that is a retrograde step. A primary storage solution within an enterprise should be capable of supporting the whole enterprise, not just bits of it. All the original reasons for storage area networks are still valid.
Storage inflection point

“When planning for the future, organisations should consider whether they want to continue managing archaic and unwieldy storage abstractions such as LUNs, RAID sets and arrays when the modern data centre really revolves around virtual machines, hypervisors and applications”

Data Solutions: Francis O’Haire
Enterprise storage technology has certainly reached an inflection point. The traditional storage vendors seem to be struggling to maintain revenues while the market is awash with young storage technology companies biting at their heels. So, how did we get here?

Over the past fifteen years, the compute layer in our data centres has been revolutionised thanks to virtualisation and advances in processor performance. As our servers have become faster and the number of virtual workloads they run has mushroomed, the storage layer has been left struggling to keep up. Most businesses are still running storage area network architectures that come from an era when servers were dedicated to a single application and those workloads did not move around on the infrastructure.
Nowadays every server typically hosts many virtual environments, all of which need high-speed access to their storage even as they flit from server to server or data centre to data centre. The dynamic nature of virtualised infrastructure, combined with high virtual machine density, has created a massive bottleneck, a lack of flexibility and very high complexity within the storage tier. This is exacerbated when new projects such as Big Data and virtual desktop infrastructure come along and demand even more performance than the norm.
If you peek behind the scenes at the data centres of the truly web-scale companies such as Google and Facebook, you will find a very different architecture in use. These companies realised early on that they could not scale their infrastructure as efficiently or as quickly as required if hamstrung by legacy storage architectures. While very few organisations are in a position to re-invent the wheel as these large household names have done, innovative technology companies such as Nutanix are bringing the capabilities of web-scale IT to the rest of us.
Having taken inspiration (and some of the best minds) from Google and its highly distributed storage architecture, Nutanix converges the compute and storage layers to eliminate complexity, bottlenecks and unpredictable cost from the data centre. Scaling the data centre with Nutanix is as easy as adding one small hyper-converged node at a time, with each node adding its compute and flash-accelerated storage resources to the pool. Data automatically moves across nodes to keep it close to the virtual machine that needs it, while copies are retained on other nodes for redundancy. As each node also functions as a storage controller, adding more nodes increases overall storage performance and avoids bottlenecks even for the most demanding workloads.
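As a rough illustration of that behaviour, the toy Python model below shows how adding nodes grows pooled capacity and aggregate controller performance, and how each write keeps a primary copy local to the VM’s node with a replica elsewhere for redundancy. The names and numbers are hypothetical; this is a sketch of the hyper-converged pattern, not Nutanix software.

```python
# Toy model of hyper-converged scaling (hypothetical names, not
# Nutanix code): each node contributes capacity and controller
# performance to the pool, and each piece of VM data is kept local
# to its VM with a replica on a different node for redundancy.
import random

class Node:
    def __init__(self, name, capacity_gb=2_000, iops=50_000):
        self.name = name
        self.capacity_gb = capacity_gb
        self.iops = iops               # each node is also a controller
        self.data = []                 # (vm, extent) pairs stored here

class Cluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        """Scale one node at a time; capacity and performance pool."""
        self.nodes.append(node)

    def totals(self):
        return (sum(n.capacity_gb for n in self.nodes),
                sum(n.iops for n in self.nodes))

    def write(self, vm, extent, local_node):
        # Primary copy stays local to the VM's node for fast access...
        local_node.data.append((vm, extent))
        # ...and a replica lands on some other node for redundancy.
        others = [n for n in self.nodes if n is not local_node]
        random.choice(others).data.append((vm, extent))

cluster = Cluster()
for i in range(3):
    cluster.add_node(Node(f"node-{i}"))
print("pooled capacity (GB), aggregate IOPS:", cluster.totals())
cluster.write("vm-42", "extent-1", cluster.nodes[0])   # local + replica
```

Each added node raises both totals, which is the linear-scaling property the paragraph above describes.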
When planning for the future, organisations should consider whether they want to continue managing archaic and unwieldy storage abstractions such as LUNs, RAID sets and arrays when the modern data centre really revolves around virtual machines, hypervisors and applications. As a hypervisor-agnostic platform for virtual computing, Nutanix allows IT to scale easily on demand to address changing business needs, without the complexity and cost associated with expensive SANs and first-generation converged systems.