Traditional fightback may be tough

10 December 2015

Scaling decider
“With the explosion of data and application growth, the storage architecture’s ability and approach to scale will be a deciding factor in the customer’s procurement choice”

Asystec Victor Forde

In short, one word: workloads! An organisation’s application workload will dictate what solution architecture is required. One needs to analyse the application workload from a performance, capacity and IO profile perspective, as well as determine whether applications are dependent on CPU and/or memory resources. Only after this analysis can the most appropriate architecture be decided.
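By way of illustration only, the short Python sketch below shows how such a workload profile might be captured and matched to a broad architecture class. The WorkloadProfile fields, the thresholds and the suggested categories are assumptions made for the sketch, not Asystec’s methodology.

from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    name: str
    peak_iops: int        # measured peak IO operations per second
    avg_io_size_kb: int   # typical IO size
    capacity_tb: float    # current dataset size
    cpu_bound: bool       # does the application saturate CPU before storage?
    memory_bound: bool    # or memory?

def suggest_architecture(w: WorkloadProfile) -> str:
    # Very rough, hypothetical rules of thumb; real sizing needs proper profiling tools.
    if w.cpu_bound or w.memory_bound:
        return "scale out compute first; storage is not the initial bottleneck"
    if w.peak_iops > 100_000 and w.avg_io_size_kb <= 16:
        return "all-flash block array (latency-sensitive OLTP profile)"
    if w.capacity_tb > 500 and w.avg_io_size_kb >= 256:
        return "scale-out file/object storage (large sequential profile)"
    return "general-purpose hybrid array"

print(suggest_architecture(WorkloadProfile("orders-db", 150_000, 8, 12.0, False, False)))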

Indeed, with the explosion of data and application growth (which shows no sign of abating), the storage architecture’s ability and approach to scale will be a deciding factor in the customer’s procurement choice. In its simplest form, the infrastructure architecture supporting applications scales either horizontally or vertically. Then you have ‘scale-out’ block, file and object workloads as well as hybrid architectures. Some vendors and solution providers profess that their one architecture addresses all workloads while maintaining exceptional performance. Abraham Maslow coined the phrase “if all you have is a hammer, everything looks like a nail”, and it comes to the fore here for such solution providers.

Asystec have strong partnerships with EMC, VMware, VCE and Virtustream, whose portfolios are leveraged for different application workloads. Take the example of a service provider with high-performance, high-compute workloads at mass scale, spanning many hundreds or thousands of server nodes: EMC ScaleIO would be an ideal fit. ScaleIO is software that creates a server-based SAN with a highly flexible, scalable shared storage architecture serving thousands of nodes. More and more, the intelligence is in the software layer, regardless of the underlying hardware. The flexibility of the software to deal with changing and evolving workloads is what customers are demanding now. Not all data is equal! Asystec are assisting customers in classifying workloads and ensuring the platform suits them, with some workloads placed in the cloud, for example SAP applications managed by Virtustream, and others, such as OLTP databases, residing on premises on all-flash arrays like EMC’s XtremIO, which delivers consistent sub-millisecond performance.

CIOs and infrastructure managers should build their business case around how their workloads scale in terms of performance, capacity and IO type, and should ask the solution provider to profile this on the proposed target architecture to ensure the highest levels of return on investment.

Purchasing cycle
“Companies are finding that certain storage platforms deliver application specific benefits that commercially and technically trump multi-purpose arrays to such an extent that they can no longer be ignored”

Arkphire Eoin Johnston, chief technical architect

Companies need to shorten the storage purchasing cycle!

The traditional approach of a single large storage investment every 3 to 7 years is no longer compatible with the agile demands of today’s businesses. Many companies find that they are out of storage space 2.5 years into a 5-year contract, because their actual growth was 45-50%, not the 20% on which the purchase was modelled. That being said, would the same company have purchased an additional 400% capacity up front on day one if they had known they would see 50% year-on-year growth over those 5 years?
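A rough, assumed calculation makes the gap concrete. The Python sketch below compounds a purchase modelled on 20% annual growth against actual growth of 50%, and shows the bought capacity running out between years two and three of the five-year term.

# Assumed figures: capacity bought to cover 5 years of 20% compound growth,
# while actual growth runs at 50% per year. Day-one data is normalised to 1.0.
planned_growth, actual_growth, term_years = 0.20, 0.50, 5

purchased = (1 + planned_growth) ** term_years  # roughly 2.5x day-one data
for year in range(term_years + 1):
    demand = (1 + actual_growth) ** year
    status = "OUT OF SPACE" if demand > purchased else "ok"
    print(f"year {year}: demand {demand:.2f}x vs purchased {purchased:.2f}x -> {status}")

# Demand overtakes the ~2.5x purchased between years two and three, while covering
# 50% growth for the full five years would have needed roughly 7.6x up front.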

The IT industry is changing, and where the VMAXs, 3PARs and XIVs of this world once promised all-conquering storage consolidation, the emergence of devops methodologies and cloud applications is making upgrades to all-purpose arrays financially infeasible for platform 3-style agile workloads. Companies are finding that certain storage platforms deliver application specific benefits that commercially and technically trump multi-purpose arrays to such an extent that they can no longer be ignored. Technologies such as in-line de-duplication, all-flash arrays and hyper-converged systems deliver the flexibility and web-scale environments that platform 3 applications require.

In short, the fluid nature of today’s IT industry means the idea of investing in a multi-year technology contract must be examined in order to avoid future compromises and disappointments.

Plan for a storage strategy review every 18-24 months, and choose the best technology affordable at that time. Flexibility is the name of the game, and storage is no exception!

Tailored solutions
“Scalability and manageability are also important considerations; the system needs to be scalable not only from a capacity perspective but perhaps also from a functionality perspective” Gavin Lockhart, senior consultant engineer, Ergo

Ergo Gavin Lockhart, senior consultant

At Ergo, when we design a storage solution for a client, we determine the client’s requirements and structure our proposal to meet their needs within a pre-defined budget.

There are six key elements we must understand to ensure that the client receives a solution that is fit for purpose. The first of these is capacity: we need to understand how much data a client has, what type of data it is and how much it is likely to grow over the life-cycle of the storage solution. The resilience and reliability of the storage system is also a critical part of the design. All systems rely on components that will eventually fail, so it is important to design a solution that can withstand these failures. At the most basic level, RAID arrays can ensure data is not lost in the event of disk failures. At the other end of the scale, SAN clusters can be created to enable high availability or disaster recovery solutions where entire LUNs are replicated between SANs spread across locations.
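As a simple, hypothetical illustration of that trade-off, the Python sketch below estimates usable capacity and failure tolerance for a few common RAID layouts; the disk counts and sizes are assumed, and the arithmetic deliberately ignores hot spares, formatting overhead and rebuild windows.

# Simplified, assumed RAID arithmetic for illustration only.
def raid_usable(level: str, disks: int, disk_tb: float) -> tuple[float, int]:
    """Return (usable TB, number of concurrent disk failures tolerated)."""
    if level == "RAID10":
        return disks * disk_tb / 2, 1    # mirrored pairs; at least one failure
    if level == "RAID5":
        return (disks - 1) * disk_tb, 1  # one disk's worth of parity
    if level == "RAID6":
        return (disks - 2) * disk_tb, 2  # two disks' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

for level in ("RAID10", "RAID5", "RAID6"):
    usable, failures = raid_usable(level, disks=12, disk_tb=4.0)
    print(f"{level}: {usable:.0f} TB usable of 48 TB raw, survives {failures} disk failure(s)")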

Performance is also key and links closely with capacity. It’s important to understand which workloads require which levels of performance. These tiers of storage range from Tier 4, offline (tape or SATA disk for backups or archival data), to Tier 0, extremely fast transactional data (SSD or in-memory computing). Scalability and manageability are also important considerations; the system needs to be scalable not only from a capacity perspective but perhaps also from a functionality perspective (SAN replication may not be needed initially but may be required in the future), and once the system is in place it needs to be maintained.
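To make the tiering idea concrete, here is a minimal, hypothetical Python mapping from a workload’s requirements to a tier; the thresholds and the intermediate tiers are assumptions layered on the Tier 0 and Tier 4 examples above.

def pick_tier(required_latency_ms: float, accessed_daily: bool, is_backup: bool) -> str:
    # Hypothetical mapping onto the Tier 0-4 model described above.
    if is_backup:
        return "Tier 4: offline (tape/SATA for backup and archive)"
    if required_latency_ms <= 1.0:
        return "Tier 0: SSD or in-memory for extremely fast transactional data"
    if accessed_daily:
        return "Tier 1/2: primary flash-hybrid storage"  # intermediate tier, assumed
    return "Tier 3: capacity-optimised nearline disk"    # intermediate tier, assumed

print(pick_tier(required_latency_ms=0.5, accessed_daily=True, is_backup=False))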

All of these factors have obvious cost implications and it’s important for organisations to offset the cost of their preferred storage solution against costs incurred due to system outages.

Ergo have a suite of services and solutions to augment the enterprise and deliver a storage solution to suit your organisation. We will ensure your success, realise storage economics and deliver on that promise.
